Sep 13 00:04:53.740833 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 00:04:53.740849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:04:53.740856 kernel: Disabled fast string operations Sep 13 00:04:53.740860 kernel: BIOS-provided physical RAM map: Sep 13 00:04:53.740864 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Sep 13 00:04:53.740868 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Sep 13 00:04:53.740875 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Sep 13 00:04:53.740879 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Sep 13 00:04:53.740883 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Sep 13 00:04:53.740887 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Sep 13 00:04:53.740892 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Sep 13 00:04:53.740940 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Sep 13 00:04:53.740944 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Sep 13 00:04:53.740949 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 13 00:04:53.740957 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Sep 13 00:04:53.740962 kernel: NX (Execute Disable) protection: active Sep 13 00:04:53.740966 kernel: APIC: Static calls initialized Sep 13 00:04:53.740971 kernel: 
SMBIOS 2.7 present. Sep 13 00:04:53.740976 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 13 00:04:53.740981 kernel: vmware: hypercall mode: 0x00 Sep 13 00:04:53.740986 kernel: Hypervisor detected: VMware Sep 13 00:04:53.740991 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 13 00:04:53.740997 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 13 00:04:53.741002 kernel: vmware: using clock offset of 3932172313 ns Sep 13 00:04:53.741006 kernel: tsc: Detected 3408.000 MHz processor Sep 13 00:04:53.741012 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:04:53.741017 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:04:53.741022 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 13 00:04:53.741027 kernel: total RAM covered: 3072M Sep 13 00:04:53.741032 kernel: Found optimal setting for mtrr clean up Sep 13 00:04:53.741037 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 13 00:04:53.741044 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 13 00:04:53.741049 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:04:53.741053 kernel: Using GB pages for direct mapping Sep 13 00:04:53.741058 kernel: ACPI: Early table checksum verification disabled Sep 13 00:04:53.741063 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 13 00:04:53.741068 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 13 00:04:53.741078 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Sep 13 00:04:53.741083 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 13 00:04:53.741088 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 13 00:04:53.741096 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 13 00:04:53.741101 kernel: ACPI: BOOT 
0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 13 00:04:53.741107 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Sep 13 00:04:53.741112 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 13 00:04:53.741117 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 13 00:04:53.741123 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 13 00:04:53.741129 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 13 00:04:53.741134 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 13 00:04:53.741139 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 13 00:04:53.741144 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 13 00:04:53.741150 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 13 00:04:53.741155 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 13 00:04:53.741160 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 13 00:04:53.741165 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 13 00:04:53.741170 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 13 00:04:53.741176 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 13 00:04:53.741182 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 13 00:04:53.741187 kernel: system APIC only can use physical flat Sep 13 00:04:53.741192 kernel: APIC: Switched APIC routing to: physical flat Sep 13 00:04:53.741197 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:04:53.741202 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 13 00:04:53.741207 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 13 00:04:53.741212 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 
13 00:04:53.741218 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 13 00:04:53.741224 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 13 00:04:53.741229 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 13 00:04:53.741234 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 13 00:04:53.741239 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Sep 13 00:04:53.741244 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Sep 13 00:04:53.741249 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Sep 13 00:04:53.741254 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Sep 13 00:04:53.741259 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Sep 13 00:04:53.741264 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Sep 13 00:04:53.741269 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Sep 13 00:04:53.741275 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Sep 13 00:04:53.741280 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Sep 13 00:04:53.741285 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Sep 13 00:04:53.741290 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Sep 13 00:04:53.741295 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Sep 13 00:04:53.741300 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Sep 13 00:04:53.741305 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Sep 13 00:04:53.741311 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Sep 13 00:04:53.741316 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Sep 13 00:04:53.741321 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Sep 13 00:04:53.741326 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Sep 13 00:04:53.741332 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Sep 13 00:04:53.741337 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Sep 13 00:04:53.741342 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Sep 13 00:04:53.741347 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Sep 13 00:04:53.741352 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Sep 13 00:04:53.741357 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Sep 13 00:04:53.741362 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Sep 13 00:04:53.741367 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Sep 13 00:04:53.741372 
kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Sep 13 00:04:53.741378 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Sep 13 00:04:53.741383 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Sep 13 00:04:53.741389 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Sep 13 00:04:53.741394 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Sep 13 00:04:53.741399 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Sep 13 00:04:53.741404 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Sep 13 00:04:53.741409 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Sep 13 00:04:53.741414 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Sep 13 00:04:53.741419 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Sep 13 00:04:53.741424 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Sep 13 00:04:53.741429 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Sep 13 00:04:53.741435 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Sep 13 00:04:53.741440 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Sep 13 00:04:53.741445 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Sep 13 00:04:53.741450 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Sep 13 00:04:53.741455 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Sep 13 00:04:53.741460 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Sep 13 00:04:53.741465 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Sep 13 00:04:53.741470 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Sep 13 00:04:53.741475 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Sep 13 00:04:53.741480 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Sep 13 00:04:53.741486 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Sep 13 00:04:53.741491 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Sep 13 00:04:53.741496 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Sep 13 00:04:53.741507 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Sep 13 00:04:53.741512 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Sep 13 00:04:53.741517 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Sep 13 00:04:53.741523 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Sep 13 00:04:53.741528 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Sep 13 00:04:53.741534 kernel: SRAT: PXM 0 
-> APIC 0x80 -> Node 0 Sep 13 00:04:53.741540 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Sep 13 00:04:53.741545 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Sep 13 00:04:53.741550 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Sep 13 00:04:53.741556 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Sep 13 00:04:53.741561 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Sep 13 00:04:53.741566 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Sep 13 00:04:53.741572 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Sep 13 00:04:53.741577 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Sep 13 00:04:53.741582 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Sep 13 00:04:53.741589 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Sep 13 00:04:53.741595 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Sep 13 00:04:53.741600 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Sep 13 00:04:53.741605 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Sep 13 00:04:53.741611 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Sep 13 00:04:53.741616 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Sep 13 00:04:53.741621 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Sep 13 00:04:53.741627 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Sep 13 00:04:53.741632 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Sep 13 00:04:53.741637 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Sep 13 00:04:53.741644 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Sep 13 00:04:53.741649 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Sep 13 00:04:53.741654 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Sep 13 00:04:53.741660 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Sep 13 00:04:53.741665 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Sep 13 00:04:53.741670 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Sep 13 00:04:53.741675 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Sep 13 00:04:53.741681 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Sep 13 00:04:53.741686 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Sep 13 00:04:53.741692 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Sep 13 00:04:53.741697 kernel: SRAT: PXM 0 -> APIC 0xbc -> 
Node 0 Sep 13 00:04:53.741703 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Sep 13 00:04:53.741709 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Sep 13 00:04:53.741714 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Sep 13 00:04:53.741719 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Sep 13 00:04:53.741725 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Sep 13 00:04:53.741730 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Sep 13 00:04:53.741735 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Sep 13 00:04:53.741741 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Sep 13 00:04:53.741746 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Sep 13 00:04:53.741751 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Sep 13 00:04:53.741758 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Sep 13 00:04:53.741763 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Sep 13 00:04:53.741769 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Sep 13 00:04:53.741774 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Sep 13 00:04:53.741779 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Sep 13 00:04:53.741785 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Sep 13 00:04:53.741790 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Sep 13 00:04:53.741796 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Sep 13 00:04:53.743913 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Sep 13 00:04:53.743924 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Sep 13 00:04:53.743929 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Sep 13 00:04:53.743935 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Sep 13 00:04:53.743940 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Sep 13 00:04:53.743946 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Sep 13 00:04:53.743951 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Sep 13 00:04:53.743957 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Sep 13 00:04:53.743962 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Sep 13 00:04:53.743968 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Sep 13 00:04:53.743973 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Sep 13 00:04:53.743978 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Sep 13 
00:04:53.743985 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Sep 13 00:04:53.743991 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Sep 13 00:04:53.743996 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Sep 13 00:04:53.744002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 13 00:04:53.744007 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 13 00:04:53.744013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 13 00:04:53.744019 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Sep 13 00:04:53.744025 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Sep 13 00:04:53.744031 kernel: Zone ranges: Sep 13 00:04:53.744038 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:04:53.744043 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 13 00:04:53.744049 kernel: Normal empty Sep 13 00:04:53.744055 kernel: Movable zone start for each node Sep 13 00:04:53.744060 kernel: Early memory node ranges Sep 13 00:04:53.744066 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 13 00:04:53.744075 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 13 00:04:53.744081 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 13 00:04:53.744086 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 13 00:04:53.744092 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:04:53.744099 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 13 00:04:53.744105 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 13 00:04:53.744110 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 13 00:04:53.744116 kernel: system APIC only can use physical flat Sep 13 00:04:53.744122 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 13 00:04:53.744127 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 13 00:04:53.744133 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge 
lint[0x1]) Sep 13 00:04:53.744138 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 13 00:04:53.744144 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 13 00:04:53.744151 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 13 00:04:53.744156 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 13 00:04:53.744162 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 13 00:04:53.744167 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 13 00:04:53.744173 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 13 00:04:53.744178 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 13 00:04:53.744184 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 13 00:04:53.744190 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 13 00:04:53.744195 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 13 00:04:53.744201 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 13 00:04:53.744207 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 13 00:04:53.744213 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 13 00:04:53.744219 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 13 00:04:53.744224 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 13 00:04:53.744230 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 13 00:04:53.744235 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 13 00:04:53.744241 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 13 00:04:53.744246 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 13 00:04:53.744252 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 13 00:04:53.744258 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 13 00:04:53.744264 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 13 00:04:53.744269 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge 
lint[0x1]) Sep 13 00:04:53.744275 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 13 00:04:53.744281 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 13 00:04:53.744286 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 13 00:04:53.744292 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 13 00:04:53.744297 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 13 00:04:53.744303 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 13 00:04:53.744308 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 13 00:04:53.744315 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 13 00:04:53.744320 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 13 00:04:53.744326 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 13 00:04:53.744331 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 13 00:04:53.744337 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 13 00:04:53.744342 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 13 00:04:53.744348 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 13 00:04:53.744353 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 13 00:04:53.744359 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 13 00:04:53.744364 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 13 00:04:53.744371 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 13 00:04:53.744377 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 13 00:04:53.744382 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 13 00:04:53.744388 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 13 00:04:53.744393 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 13 00:04:53.744399 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 13 00:04:53.744405 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge 
lint[0x1]) Sep 13 00:04:53.744410 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 13 00:04:53.744416 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 13 00:04:53.744422 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 13 00:04:53.744428 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 13 00:04:53.744433 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 13 00:04:53.744439 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 13 00:04:53.744444 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 13 00:04:53.744450 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 13 00:04:53.744455 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 13 00:04:53.744461 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 13 00:04:53.744467 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 13 00:04:53.744473 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 13 00:04:53.744479 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 13 00:04:53.744485 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 13 00:04:53.744490 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 13 00:04:53.744496 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 13 00:04:53.744501 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 13 00:04:53.744507 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 13 00:04:53.744512 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 13 00:04:53.744518 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 13 00:04:53.744523 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 13 00:04:53.744530 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 13 00:04:53.744536 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 13 00:04:53.744541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge 
lint[0x1]) Sep 13 00:04:53.744547 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 13 00:04:53.744553 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 13 00:04:53.744558 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 13 00:04:53.744564 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 13 00:04:53.744569 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 13 00:04:53.744575 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 13 00:04:53.744580 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 13 00:04:53.744587 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 13 00:04:53.744592 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 13 00:04:53.744598 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 13 00:04:53.744603 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 13 00:04:53.744609 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 13 00:04:53.744614 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 13 00:04:53.744620 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 13 00:04:53.744625 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 13 00:04:53.744631 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 13 00:04:53.744638 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 13 00:04:53.744643 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 13 00:04:53.744649 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 13 00:04:53.744654 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 13 00:04:53.744660 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 13 00:04:53.744666 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 13 00:04:53.744671 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 13 00:04:53.744677 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge 
lint[0x1]) Sep 13 00:04:53.744682 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 13 00:04:53.744688 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 13 00:04:53.744694 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 13 00:04:53.744700 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 13 00:04:53.744706 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 13 00:04:53.744711 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 13 00:04:53.744717 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 13 00:04:53.744722 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 13 00:04:53.744728 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 13 00:04:53.744733 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 13 00:04:53.744739 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 13 00:04:53.744744 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 13 00:04:53.744751 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 13 00:04:53.744756 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 13 00:04:53.744762 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 13 00:04:53.744768 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 13 00:04:53.744773 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 13 00:04:53.744778 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 13 00:04:53.744784 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 13 00:04:53.744790 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 13 00:04:53.744795 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 13 00:04:53.744802 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 13 00:04:53.744807 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 13 00:04:53.744813 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge 
lint[0x1]) Sep 13 00:04:53.744818 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 13 00:04:53.744824 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 13 00:04:53.744829 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 13 00:04:53.744835 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 13 00:04:53.744841 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 13 00:04:53.744846 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:04:53.744852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 13 00:04:53.744858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:04:53.744864 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 13 00:04:53.744870 kernel: TSC deadline timer available Sep 13 00:04:53.744875 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Sep 13 00:04:53.744881 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 13 00:04:53.744887 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 13 00:04:53.744908 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:04:53.744917 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 13 00:04:53.744923 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 13 00:04:53.744931 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 13 00:04:53.744937 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 13 00:04:53.744942 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 13 00:04:53.744948 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 13 00:04:53.744953 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 13 00:04:53.744959 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 13 00:04:53.744972 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 13 00:04:53.744979 
kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 13 00:04:53.744985 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 13 00:04:53.744992 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 13 00:04:53.744998 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 13 00:04:53.745003 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 13 00:04:53.745009 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 13 00:04:53.745015 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 13 00:04:53.745021 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 13 00:04:53.745026 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 13 00:04:53.745032 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 13 00:04:53.745040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:04:53.745047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 13 00:04:53.745052 kernel: random: crng init done Sep 13 00:04:53.745058 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 13 00:04:53.745064 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 13 00:04:53.745070 kernel: printk: log_buf_len min size: 262144 bytes Sep 13 00:04:53.745076 kernel: printk: log_buf_len: 1048576 bytes Sep 13 00:04:53.745082 kernel: printk: early log buf free: 239648(91%) Sep 13 00:04:53.745089 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:04:53.745095 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:04:53.745101 kernel: Fallback order for Node 0: 0 Sep 13 00:04:53.745107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Sep 13 00:04:53.745113 kernel: Policy zone: DMA32 Sep 13 00:04:53.745119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:04:53.745125 kernel: Memory: 1936388K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 159980K reserved, 0K cma-reserved) Sep 13 00:04:53.745132 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 13 00:04:53.745138 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:04:53.745144 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:04:53.745150 kernel: Dynamic Preempt: voluntary Sep 13 00:04:53.745157 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:04:53.745163 kernel: rcu: RCU event tracing is enabled. Sep 13 00:04:53.745169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 13 00:04:53.745175 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:04:53.745182 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:04:53.745188 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:04:53.745194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:04:53.745200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 13 00:04:53.745206 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 13 00:04:53.745212 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 13 00:04:53.745218 kernel: Console: colour VGA+ 80x25 Sep 13 00:04:53.745224 kernel: printk: console [tty0] enabled Sep 13 00:04:53.745230 kernel: printk: console [ttyS0] enabled Sep 13 00:04:53.745237 kernel: ACPI: Core revision 20230628 Sep 13 00:04:53.745243 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 13 00:04:53.745249 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:04:53.745255 kernel: x2apic enabled Sep 13 00:04:53.745261 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 00:04:53.745268 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 13 00:04:53.745274 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 13 00:04:53.745280 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 13 00:04:53.745286 kernel: Disabled fast string operations Sep 13 00:04:53.745292 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 13 00:04:53.745299 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 13 00:04:53.748923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:04:53.748938 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 13 00:04:53.748945 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 13 00:04:53.748951 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 13 00:04:53.748957 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 13 00:04:53.748963 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 13 00:04:53.748969 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:04:53.748979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:04:53.748985 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:04:53.748991 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 13 00:04:53.748997 kernel: GDS: Unknown: Dependent on hypervisor status Sep 13 00:04:53.749003 kernel: active return thunk: its_return_thunk Sep 13 00:04:53.749010 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:04:53.749016 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:04:53.749022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:04:53.749028 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:04:53.749035 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:04:53.749041 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 13 00:04:53.749047 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:04:53.749053 kernel: pid_max: default: 131072 minimum: 1024 Sep 13 00:04:53.749059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:04:53.749065 kernel: landlock: Up and running. Sep 13 00:04:53.749071 kernel: SELinux: Initializing. Sep 13 00:04:53.749077 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.749084 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.749091 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 13 00:04:53.749097 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 13 00:04:53.749103 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 13 00:04:53.749109 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 13 00:04:53.749117 kernel: Performance Events: Skylake events, core PMU driver. Sep 13 00:04:53.749127 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 13 00:04:53.749136 kernel: core: CPUID marked event: 'instructions' unavailable Sep 13 00:04:53.749142 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 13 00:04:53.749148 kernel: core: CPUID marked event: 'cache references' unavailable Sep 13 00:04:53.749155 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 13 00:04:53.749161 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 13 00:04:53.749167 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 13 00:04:53.749173 kernel: ... version: 1 Sep 13 00:04:53.749179 kernel: ... bit width: 48 Sep 13 00:04:53.749185 kernel: ... generic registers: 4 Sep 13 00:04:53.749191 kernel: ... value mask: 0000ffffffffffff Sep 13 00:04:53.749198 kernel: ... 
max period: 000000007fffffff Sep 13 00:04:53.749203 kernel: ... fixed-purpose events: 0 Sep 13 00:04:53.749211 kernel: ... event mask: 000000000000000f Sep 13 00:04:53.749217 kernel: signal: max sigframe size: 1776 Sep 13 00:04:53.749223 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:04:53.749229 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:04:53.749235 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:04:53.749241 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:04:53.749247 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:04:53.749253 kernel: .... node #0, CPUs: #1 Sep 13 00:04:53.749259 kernel: Disabled fast string operations Sep 13 00:04:53.749266 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Sep 13 00:04:53.749272 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 13 00:04:53.749278 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:04:53.749284 kernel: smpboot: Max logical packages: 128 Sep 13 00:04:53.749290 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 13 00:04:53.749296 kernel: devtmpfs: initialized Sep 13 00:04:53.749302 kernel: x86/mm: Memory block size: 128MB Sep 13 00:04:53.749308 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 13 00:04:53.749314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:04:53.749321 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 13 00:04:53.749328 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:04:53.749334 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:04:53.749340 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:04:53.749346 kernel: audit: type=2000 audit(1757721892.091:1): state=initialized audit_enabled=0 res=1 Sep 13 00:04:53.749352 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:04:53.749358 
kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:04:53.749364 kernel: cpuidle: using governor menu Sep 13 00:04:53.749370 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 13 00:04:53.749377 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:04:53.749383 kernel: dca service started, version 1.12.1 Sep 13 00:04:53.749390 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Sep 13 00:04:53.749396 kernel: PCI: Using configuration type 1 for base access Sep 13 00:04:53.749402 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 00:04:53.749408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:04:53.749414 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:04:53.749420 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:04:53.749426 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:04:53.749433 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:04:53.749439 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:04:53.749445 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:04:53.749451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:04:53.749457 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 13 00:04:53.749463 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 00:04:53.749469 kernel: ACPI: Interpreter enabled Sep 13 00:04:53.749475 kernel: ACPI: PM: (supports S0 S1 S5) Sep 13 00:04:53.749481 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:04:53.749488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:04:53.749494 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 00:04:53.749500 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 13 00:04:53.749506 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 13 00:04:53.749598 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:04:53.749656 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 13 00:04:53.749708 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 13 00:04:53.749719 kernel: PCI host bridge to bus 0000:00 Sep 13 00:04:53.749773 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:04:53.749837 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 13 00:04:53.749887 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 00:04:53.749990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:04:53.750037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 13 00:04:53.750083 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 13 00:04:53.750150 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Sep 13 00:04:53.750207 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Sep 13 00:04:53.750263 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Sep 13 00:04:53.750319 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Sep 13 00:04:53.750371 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Sep 13 00:04:53.750422 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 13 00:04:53.750482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 13 00:04:53.750535 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 13 00:04:53.750586 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 13 00:04:53.750643 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Sep 13 00:04:53.750695 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 13 00:04:53.750746 kernel: pci 0000:00:07.3: quirk: 
[io 0x1040-0x104f] claimed by PIIX4 SMB Sep 13 00:04:53.750800 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Sep 13 00:04:53.750860 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Sep 13 00:04:53.753039 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Sep 13 00:04:53.753103 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Sep 13 00:04:53.753157 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Sep 13 00:04:53.753209 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Sep 13 00:04:53.753260 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Sep 13 00:04:53.753310 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Sep 13 00:04:53.753366 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:04:53.753422 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Sep 13 00:04:53.753478 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.753530 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.753591 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.753643 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.753702 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.753755 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.753812 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.753864 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.753974 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754027 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754098 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754161 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754216 kernel: pci 0000:00:15.6: [15ad:07a0] 
type 01 class 0x060400 Sep 13 00:04:53.754268 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754322 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754374 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754432 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754490 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754551 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754603 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754660 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754712 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754771 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.754823 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.754879 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758196 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758271 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758332 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758395 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758456 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758512 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758564 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758620 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758671 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758732 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758785 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758840 kernel: 
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.758892 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.758959 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759011 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759070 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759122 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759178 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759230 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759285 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759336 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759391 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759446 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759501 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759553 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759607 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759659 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759716 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759770 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.759825 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.759876 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.762628 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.762691 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.762750 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.762808 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Sep 13 00:04:53.762865 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.762935 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.762995 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.763047 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.763108 kernel: pci_bus 0000:01: extended config space not accessible Sep 13 00:04:53.763168 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:04:53.763231 kernel: pci_bus 0000:02: extended config space not accessible Sep 13 00:04:53.763241 kernel: acpiphp: Slot [32] registered Sep 13 00:04:53.763248 kernel: acpiphp: Slot [33] registered Sep 13 00:04:53.763254 kernel: acpiphp: Slot [34] registered Sep 13 00:04:53.763260 kernel: acpiphp: Slot [35] registered Sep 13 00:04:53.763266 kernel: acpiphp: Slot [36] registered Sep 13 00:04:53.763272 kernel: acpiphp: Slot [37] registered Sep 13 00:04:53.763278 kernel: acpiphp: Slot [38] registered Sep 13 00:04:53.763287 kernel: acpiphp: Slot [39] registered Sep 13 00:04:53.763293 kernel: acpiphp: Slot [40] registered Sep 13 00:04:53.763298 kernel: acpiphp: Slot [41] registered Sep 13 00:04:53.763304 kernel: acpiphp: Slot [42] registered Sep 13 00:04:53.763310 kernel: acpiphp: Slot [43] registered Sep 13 00:04:53.763316 kernel: acpiphp: Slot [44] registered Sep 13 00:04:53.763322 kernel: acpiphp: Slot [45] registered Sep 13 00:04:53.763328 kernel: acpiphp: Slot [46] registered Sep 13 00:04:53.763334 kernel: acpiphp: Slot [47] registered Sep 13 00:04:53.763341 kernel: acpiphp: Slot [48] registered Sep 13 00:04:53.763347 kernel: acpiphp: Slot [49] registered Sep 13 00:04:53.763353 kernel: acpiphp: Slot [50] registered Sep 13 00:04:53.763359 kernel: acpiphp: Slot [51] registered Sep 13 00:04:53.763365 kernel: acpiphp: Slot [52] registered Sep 13 00:04:53.763371 kernel: acpiphp: Slot [53] registered Sep 13 00:04:53.763377 kernel: acpiphp: Slot [54] registered Sep 13 
00:04:53.763383 kernel: acpiphp: Slot [55] registered Sep 13 00:04:53.763389 kernel: acpiphp: Slot [56] registered Sep 13 00:04:53.763395 kernel: acpiphp: Slot [57] registered Sep 13 00:04:53.763403 kernel: acpiphp: Slot [58] registered Sep 13 00:04:53.763409 kernel: acpiphp: Slot [59] registered Sep 13 00:04:53.763415 kernel: acpiphp: Slot [60] registered Sep 13 00:04:53.763421 kernel: acpiphp: Slot [61] registered Sep 13 00:04:53.763427 kernel: acpiphp: Slot [62] registered Sep 13 00:04:53.763433 kernel: acpiphp: Slot [63] registered Sep 13 00:04:53.763487 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 13 00:04:53.763547 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 00:04:53.763600 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.763655 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.763706 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 13 00:04:53.763764 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 13 00:04:53.763814 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 13 00:04:53.763865 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 13 00:04:53.763926 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 13 00:04:53.763988 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 13 00:04:53.764046 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 13 00:04:53.764099 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 13 00:04:53.764151 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 00:04:53.764203 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.764255 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Sep 13 00:04:53.764309 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 00:04:53.764360 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 00:04:53.764413 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.764466 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 00:04:53.764517 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 00:04:53.764568 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.764619 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 00:04:53.764673 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 00:04:53.764735 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 00:04:53.764792 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.764848 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.768805 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:04:53.768884 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.768965 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.769020 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:04:53.769071 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.769123 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.769182 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:04:53.769233 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.769285 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.769338 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:04:53.769397 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.769456 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.769512 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:04:53.769568 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.769620 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.769679 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 13 00:04:53.769734 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 13 00:04:53.769787 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 13 00:04:53.769842 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 13 00:04:53.769910 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 13 00:04:53.769972 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 00:04:53.770027 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 13 00:04:53.770079 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:04:53.770131 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 00:04:53.770184 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:04:53.770235 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 00:04:53.770296 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 00:04:53.770351 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:04:53.770403 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 00:04:53.770458 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.770510 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.770569 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:04:53.770622 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 00:04:53.770673 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.770727 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.770787 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:04:53.770839 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.770890 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.771076 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:04:53.771135 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.771187 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.771244 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:04:53.771300 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.771351 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.771408 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:04:53.771461 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.771511 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.771564 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 00:04:53.771616 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.771672 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.771728 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:04:53.771779 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 00:04:53.771830 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.771881 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.771957 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:04:53.772009 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 00:04:53.772060 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.772114 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.772167 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:04:53.772218 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 00:04:53.772268 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.772319 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.772373 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:04:53.772424 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.772474 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.772531 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:04:53.772583 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.772634 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.772688 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:04:53.772740 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.772791 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 00:04:53.772845 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:04:53.772946 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.773015 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.773070 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:04:53.773121 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.773177 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.773237 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:04:53.773288 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 00:04:53.773338 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.773388 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.773851 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:04:53.773919 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 00:04:53.773974 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.774026 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.774089 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:04:53.774141 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.774192 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.774251 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:04:53.774303 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.774355 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.774408 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 
00:04:53.774459 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.774510 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 00:04:53.774563 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:04:53.774622 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.774677 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.774730 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:04:53.774780 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.774830 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.774883 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:04:53.774944 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.775001 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.775010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 13 00:04:53.775017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 13 00:04:53.775026 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 13 00:04:53.775032 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:04:53.775038 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 13 00:04:53.775045 kernel: iommu: Default domain type: Translated Sep 13 00:04:53.775051 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:04:53.775061 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:04:53.775067 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:04:53.775074 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 13 00:04:53.775079 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 13 00:04:53.775136 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 13 00:04:53.775192 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Sep 13 00:04:53.775244 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:04:53.775253 kernel: vgaarb: loaded Sep 13 00:04:53.775259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 13 00:04:53.775265 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 13 00:04:53.775272 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:04:53.775278 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:04:53.775289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:04:53.775296 kernel: pnp: PnP ACPI init Sep 13 00:04:53.775354 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 13 00:04:53.775403 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 13 00:04:53.775450 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 13 00:04:53.775500 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 13 00:04:53.775551 kernel: pnp 00:06: [dma 2] Sep 13 00:04:53.775605 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 13 00:04:53.775652 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 13 00:04:53.775698 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 13 00:04:53.775706 kernel: pnp: PnP ACPI: found 8 devices Sep 13 00:04:53.775713 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:04:53.775719 kernel: NET: Registered PF_INET protocol family Sep 13 00:04:53.775725 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:04:53.775732 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 00:04:53.775740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:04:53.775746 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:04:53.775752 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:04:53.775758 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:04:53.775764 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.775770 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.775776 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:04:53.775783 kernel: NET: Registered PF_XDP protocol family Sep 13 00:04:53.775836 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 13 00:04:53.775907 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 00:04:53.775965 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:04:53.776020 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:04:53.776076 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:04:53.776134 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 13 00:04:53.776227 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 13 00:04:53.776332 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 13 00:04:53.776388 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 13 00:04:53.776441 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 13 00:04:53.776493 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 13 00:04:53.776547 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 13 00:04:53.776603 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 13 
00:04:53.776656 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 13 00:04:53.776708 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 13 00:04:53.776761 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 13 00:04:53.776814 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 13 00:04:53.776865 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 13 00:04:53.777001 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 13 00:04:53.777069 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 13 00:04:53.777137 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 13 00:04:53.777190 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 13 00:04:53.777240 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 13 00:04:53.777291 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.777347 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.777398 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777448 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777499 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777549 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777599 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777651 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777702 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777755 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 
13 00:04:53.777806 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777857 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777915 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777967 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778019 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778076 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778136 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778191 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778242 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778298 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778350 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778400 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778452 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778503 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778554 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778608 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778659 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778709 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778760 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778810 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778860 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778923 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778977 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779030 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779081 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779131 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779183 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779234 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779285 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779335 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779386 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779439 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779490 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779540 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779590 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779641 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779691 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779741 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779791 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779842 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779906 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779960 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780026 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780282 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780397 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Sep 13 00:04:53.780472 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780527 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780578 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780630 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780680 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780734 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780784 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780835 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780885 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780953 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781004 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781055 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781109 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781160 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781213 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781264 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781314 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781365 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781415 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781465 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781515 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781566 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781616 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781666 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781720 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781770 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781820 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781870 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781928 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781979 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.782038 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.782106 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:04:53.782158 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 13 00:04:53.782213 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 00:04:53.782263 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.782313 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.782369 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 13 00:04:53.782422 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 00:04:53.782472 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 00:04:53.782524 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.782576 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.782630 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 00:04:53.782682 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 00:04:53.782732 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.782784 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 
00:04:53.782838 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 00:04:53.782890 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 00:04:53.782963 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.783014 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.783064 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:04:53.783118 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.783168 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.783219 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:04:53.783305 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.783378 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.783450 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:04:53.783505 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.783555 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.783606 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:04:53.783657 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.783708 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.783759 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:04:53.783810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.783861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.784187 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 13 00:04:53.784243 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:04:53.784298 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 00:04:53.784349 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Sep 13 00:04:53.784400 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.784724 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:04:53.784784 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 00:04:53.784836 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.784888 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.785064 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:04:53.785118 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 00:04:53.785173 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.785225 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.785278 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:04:53.785329 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.785380 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.785431 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:04:53.785482 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.785533 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.785583 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:04:53.785637 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.785688 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.785739 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:04:53.785790 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.785840 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.785891 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 00:04:53.786002 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.786052 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.787375 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:04:53.787435 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 00:04:53.787493 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.787545 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.787599 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:04:53.787650 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 00:04:53.787701 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.787752 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.787805 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:04:53.787856 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 00:04:53.787942 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.788014 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.788068 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:04:53.788125 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.788175 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.788226 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:04:53.788276 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.788327 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.788378 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:04:53.788429 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.788479 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 
00:04:53.788534 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:04:53.788586 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.788637 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.788687 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:04:53.788738 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.788788 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.788839 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:04:53.788890 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 00:04:53.789974 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.790034 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.790094 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:04:53.790147 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 00:04:53.790200 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.790252 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.790303 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:04:53.790355 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.790406 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.790457 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:04:53.790510 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.790564 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.790615 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 00:04:53.790665 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.790716 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Sep 13 00:04:53.790767 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:04:53.790818 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.790868 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.791023 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:04:53.791080 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.791135 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.791187 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:04:53.791237 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.791288 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.791338 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 00:04:53.791384 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 00:04:53.791429 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 00:04:53.791474 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 13 00:04:53.791517 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 13 00:04:53.791570 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 13 00:04:53.791618 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.791664 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.791710 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 00:04:53.791756 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 00:04:53.791804 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 00:04:53.791849 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 13 00:04:53.791913 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 13 00:04:53.791970 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 13 00:04:53.792017 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.792064 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.792114 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 13 00:04:53.792162 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.792208 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 00:04:53.792261 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 13 00:04:53.792308 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.792354 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.792405 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.792452 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.792503 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.792554 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.792605 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.792652 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.792702 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.792750 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.792804 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.792862 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.792945 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 13 00:04:53.792994 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 13 00:04:53.793041 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.793092 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Sep 13 00:04:53.793185 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.793234 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.793290 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 13 00:04:53.793339 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.793390 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.793442 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.793491 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.793543 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.793594 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.793646 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.793694 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.793745 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.793793 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.793844 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.793952 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.794006 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 13 00:04:53.794054 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.794106 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.794160 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 13 00:04:53.794208 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.794254 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.794309 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Sep 13 00:04:53.794356 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.794403 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.794453 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.794501 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.794552 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.794603 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.794655 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.794702 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 00:04:53.794754 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.794801 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.794852 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.794908 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.794965 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 13 00:04:53.795013 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.795060 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.795113 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 13 00:04:53.795161 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.795208 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.795261 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.795309 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.795360 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.795407 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.795458 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.795506 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 00:04:53.795561 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.795608 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.795659 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.795707 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.795758 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.795806 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.795863 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:04:53.795872 kernel: PCI: CLS 32 bytes, default 64 Sep 13 00:04:53.795879 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:04:53.795886 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 13 00:04:53.795899 kernel: clocksource: Switched to clocksource tsc Sep 13 00:04:53.795914 kernel: Initialise system trusted keyrings Sep 13 00:04:53.795921 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 00:04:53.795927 kernel: Key type asymmetric registered Sep 13 00:04:53.795934 kernel: Asymmetric key parser 'x509' registered Sep 13 00:04:53.795942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:04:53.795949 kernel: io scheduler mq-deadline registered Sep 13 00:04:53.795955 kernel: io scheduler kyber registered Sep 13 00:04:53.795961 kernel: io scheduler bfq registered Sep 13 00:04:53.796020 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 13 00:04:53.796076 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796131 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 13 00:04:53.796183 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796237 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 13 00:04:53.796290 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796342 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 13 00:04:53.796393 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796445 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 13 00:04:53.796497 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796553 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 13 00:04:53.796604 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796656 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 13 00:04:53.796708 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796761 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 13 00:04:53.796816 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796867 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 13 00:04:53.796947 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797000 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 13 00:04:53.797051 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797102 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 13 00:04:53.797153 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797206 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 13 00:04:53.797257 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797312 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 13 00:04:53.797363 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797414 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 13 00:04:53.797467 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797522 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 13 00:04:53.797596 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797666 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 13 00:04:53.797741 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797796 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 13 00:04:53.797852 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797926 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 13 00:04:53.797981 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798033 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 13 00:04:53.798112 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798177 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 13 00:04:53.798230 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798293 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 13 00:04:53.798351 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798410 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 13 00:04:53.798463 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798525 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 13 00:04:53.798581 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798633 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 13 00:04:53.798684 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798737 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 13 00:04:53.798788 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798839 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 13 00:04:53.799005 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799065 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 13 00:04:53.799117 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799168 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 13 00:04:53.799220 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799272 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 13 00:04:53.799326 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799377 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 13 00:04:53.799429 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799480 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 13 00:04:53.799531 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799585 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 13 00:04:53.799636 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799646 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Sep 13 00:04:53.799653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:04:53.799660 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:04:53.799667 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 13 00:04:53.799674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:04:53.799682 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:04:53.799735 kernel: rtc_cmos 00:01: registered as rtc0 Sep 13 00:04:53.799785 kernel: rtc_cmos 00:01: setting system clock to 2025-09-13T00:04:53 UTC (1757721893) Sep 13 00:04:53.799832 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 13 00:04:53.799841 kernel: intel_pstate: CPU model not supported Sep 13 00:04:53.799847 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:04:53.799854 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:04:53.799860 kernel: Segment Routing with IPv6 Sep 13 00:04:53.799869 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:04:53.799876 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:04:53.799882 kernel: Key type dns_resolver registered Sep 13 00:04:53.799889 kernel: IPI shorthand broadcast: enabled Sep 13 00:04:53.801305 kernel: sched_clock: Marking stable (917003453, 240745519)->(1214698653, -56949681) Sep 13 00:04:53.801315 kernel: registered taskstats version 1 Sep 13 00:04:53.801322 kernel: Loading compiled-in X.509 certificates Sep 13 00:04:53.801329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:04:53.801338 kernel: Key type .fscrypt registered Sep 13 00:04:53.801346 kernel: Key type fscrypt-provisioning registered Sep 13 00:04:53.801352 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:04:53.801358 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:04:53.801365 kernel: ima: No architecture policies found Sep 13 00:04:53.801372 kernel: clk: Disabling unused clocks Sep 13 00:04:53.801378 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:04:53.801385 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:04:53.801391 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:04:53.801398 kernel: Run /init as init process Sep 13 00:04:53.801405 kernel: with arguments: Sep 13 00:04:53.801412 kernel: /init Sep 13 00:04:53.801418 kernel: with environment: Sep 13 00:04:53.801424 kernel: HOME=/ Sep 13 00:04:53.801431 kernel: TERM=linux Sep 13 00:04:53.801437 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:04:53.801444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:04:53.801453 systemd[1]: Detected virtualization vmware. Sep 13 00:04:53.801461 systemd[1]: Detected architecture x86-64. Sep 13 00:04:53.801467 systemd[1]: Running in initrd. Sep 13 00:04:53.801474 systemd[1]: No hostname configured, using default hostname. Sep 13 00:04:53.801480 systemd[1]: Hostname set to . Sep 13 00:04:53.801487 systemd[1]: Initializing machine ID from random generator. Sep 13 00:04:53.801493 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:04:53.801500 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:04:53.801507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 13 00:04:53.801515 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:04:53.801522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:04:53.801528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:04:53.801535 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:04:53.801543 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:04:53.801550 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:04:53.801557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:04:53.801564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:04:53.801571 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:04:53.801577 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:04:53.801584 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:04:53.801590 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:04:53.801597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:04:53.801604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:04:53.801610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:04:53.801618 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:04:53.801625 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:04:53.801631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:04:53.801638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 00:04:53.801645 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:04:53.801651 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:04:53.801658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:04:53.801665 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:04:53.801671 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:04:53.801679 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:04:53.801686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:04:53.801704 systemd-journald[216]: Collecting audit messages is disabled. Sep 13 00:04:53.801721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:53.801730 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:04:53.801737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:04:53.801743 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:04:53.801750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:04:53.801759 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:04:53.801765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:04:53.801772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:53.801780 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:04:53.801787 kernel: Bridge firewalling registered Sep 13 00:04:53.801794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:04:53.801800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:04:53.801807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:53.801814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:04:53.801822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:53.801829 systemd-journald[216]: Journal started Sep 13 00:04:53.801843 systemd-journald[216]: Runtime Journal (/run/log/journal/5719d7ff84694b6888fd8c83d8e4a934) is 4.8M, max 38.7M, 33.8M free. Sep 13 00:04:53.759917 systemd-modules-load[217]: Inserted module 'overlay' Sep 13 00:04:53.784486 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 13 00:04:53.805529 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:04:53.815037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:04:53.815488 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:53.816978 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:04:53.819672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:04:53.821113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 13 00:04:53.825532 dracut-cmdline[249]: dracut-dracut-053 Sep 13 00:04:53.827668 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:04:53.840982 systemd-resolved[251]: Positive Trust Anchors: Sep 13 00:04:53.840991 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:04:53.841013 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:04:53.843035 systemd-resolved[251]: Defaulting to hostname 'linux'. Sep 13 00:04:53.843632 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:04:53.843992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:04:53.873912 kernel: SCSI subsystem initialized Sep 13 00:04:53.881914 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 00:04:53.889913 kernel: iscsi: registered transport (tcp) Sep 13 00:04:53.904919 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:04:53.904964 kernel: QLogic iSCSI HBA Driver Sep 13 00:04:53.925597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:04:53.930000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:04:53.945177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:04:53.945222 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:04:53.946248 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:04:53.978917 kernel: raid6: avx2x4 gen() 49590 MB/s Sep 13 00:04:53.994910 kernel: raid6: avx2x2 gen() 52888 MB/s Sep 13 00:04:54.012100 kernel: raid6: avx2x1 gen() 42971 MB/s Sep 13 00:04:54.012150 kernel: raid6: using algorithm avx2x2 gen() 52888 MB/s Sep 13 00:04:54.030127 kernel: raid6: .... xor() 30180 MB/s, rmw enabled Sep 13 00:04:54.030181 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:04:54.043910 kernel: xor: automatically using best checksumming function avx Sep 13 00:04:54.148912 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:04:54.154403 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:04:54.159009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:04:54.167219 systemd-udevd[435]: Using default interface naming scheme 'v255'. Sep 13 00:04:54.170368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:04:54.178024 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:04:54.185515 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Sep 13 00:04:54.202122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:04:54.207992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:04:54.281487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:04:54.285010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:04:54.293015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:04:54.293495 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:04:54.294295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:04:54.294684 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:04:54.298031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:04:54.307568 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:04:54.343908 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 13 00:04:54.352963 kernel: vmw_pvscsi: using 64bit dma Sep 13 00:04:54.357045 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Sep 13 00:04:54.357074 kernel: vmw_pvscsi: max_id: 16 Sep 13 00:04:54.357083 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 13 00:04:54.361922 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 13 00:04:54.369529 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 13 00:04:54.369644 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 13 00:04:54.369654 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 13 00:04:54.369661 kernel: vmw_pvscsi: using MSI-X Sep 13 00:04:54.370387 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:04:54.373055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:04:54.373130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:04:54.373459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:04:54.373581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:04:54.381455 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 13 00:04:54.381553 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 13 00:04:54.381628 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 13 00:04:54.381695 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 13 00:04:54.373651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:54.381358 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:54.382912 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:04:54.387366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:54.387917 kernel: AES CTR mode by8 optimization enabled Sep 13 00:04:54.400003 kernel: libata version 3.00 loaded. Sep 13 00:04:54.405119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:54.406910 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 13 00:04:54.409956 kernel: scsi host1: ata_piix Sep 13 00:04:54.410256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:04:54.413370 kernel: scsi host2: ata_piix Sep 13 00:04:54.413472 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Sep 13 00:04:54.413483 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Sep 13 00:04:54.421567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:04:54.582927 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 13 00:04:54.588969 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 13 00:04:54.599731 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 13 00:04:54.600037 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:04:54.600135 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 13 00:04:54.600223 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 13 00:04:54.600286 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 13 00:04:54.629924 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:04:54.630920 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:04:54.643926 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 13 00:04:54.644143 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:04:54.654925 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:04:54.666911 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481) Sep 13 00:04:54.668670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 13 00:04:54.669877 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (493) Sep 13 00:04:54.672784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 13 00:04:54.678834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 13 00:04:54.682615 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 13 00:04:54.682783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 13 00:04:54.687997 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 13 00:04:54.750913 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:04:54.756928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:04:55.802937 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:04:55.803776 disk-uuid[590]: The operation has completed successfully. Sep 13 00:04:55.845461 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:04:55.845537 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:04:55.848002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:04:55.852297 sh[606]: Success Sep 13 00:04:55.861907 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:04:55.976442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:04:55.977571 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:04:55.977891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:04:56.030925 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:04:56.030967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:04:56.030977 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:04:56.030985 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:04:56.032326 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:04:56.039913 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:04:56.042120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:04:56.050002 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 13 00:04:56.051477 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 13 00:04:56.077236 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:04:56.077277 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:04:56.077286 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:04:56.110917 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:04:56.120326 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:04:56.122097 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:04:56.129388 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:04:56.133025 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:04:56.187992 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 13 00:04:56.193998 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 13 00:04:56.261702 ignition[665]: Ignition 2.19.0 Sep 13 00:04:56.261710 ignition[665]: Stage: fetch-offline Sep 13 00:04:56.261733 ignition[665]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:56.261739 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 00:04:56.261804 ignition[665]: parsed url from cmdline: "" Sep 13 00:04:56.261807 ignition[665]: no config URL provided Sep 13 00:04:56.261810 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:04:56.261815 ignition[665]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:04:56.262191 ignition[665]: config successfully fetched Sep 13 00:04:56.262214 ignition[665]: parsing config with SHA512: c0eb510831ef02a8e5bf2e3d8264eff5a2489ae71cc3ec289809249edd76aa93ec0365019cd1ad09a86078aa2c8a9e8390d2e6b929e54decf77f356be764311d Sep 13 00:04:56.265498 unknown[665]: fetched base config from "system" Sep 13 00:04:56.265743 ignition[665]: fetch-offline: fetch-offline passed Sep 13 00:04:56.265506 unknown[665]: fetched user config from "vmware" Sep 13 00:04:56.265781 ignition[665]: Ignition finished successfully Sep 13 00:04:56.265927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:04:56.266800 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:04:56.271020 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:04:56.284613 systemd-networkd[800]: lo: Link UP Sep 13 00:04:56.284622 systemd-networkd[800]: lo: Gained carrier Sep 13 00:04:56.285468 systemd-networkd[800]: Enumeration completed Sep 13 00:04:56.285771 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:04:56.285772 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 13 00:04:56.285942 systemd[1]: Reached target network.target - Network. 
Sep 13 00:04:56.286028 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:04:56.290996 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 13 00:04:56.291119 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 13 00:04:56.289369 systemd-networkd[800]: ens192: Link UP Sep 13 00:04:56.289371 systemd-networkd[800]: ens192: Gained carrier Sep 13 00:04:56.294549 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:04:56.305345 ignition[802]: Ignition 2.19.0 Sep 13 00:04:56.305353 ignition[802]: Stage: kargs Sep 13 00:04:56.305501 ignition[802]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:56.305508 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 00:04:56.307373 ignition[802]: kargs: kargs passed Sep 13 00:04:56.307442 ignition[802]: Ignition finished successfully Sep 13 00:04:56.308686 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:04:56.313030 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:04:56.320858 ignition[809]: Ignition 2.19.0 Sep 13 00:04:56.320865 ignition[809]: Stage: disks Sep 13 00:04:56.320995 ignition[809]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:56.321002 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 00:04:56.321592 ignition[809]: disks: disks passed Sep 13 00:04:56.321624 ignition[809]: Ignition finished successfully Sep 13 00:04:56.322314 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:04:56.322849 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:04:56.323100 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:04:56.323315 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 13 00:04:56.323521 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:04:56.323727 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:04:56.328992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:04:56.378310 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:04:56.379573 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:04:56.385052 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:04:56.467673 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:04:56.467907 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:04:56.468189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:04:56.492007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:04:56.493382 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:04:56.493663 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 00:04:56.493691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:04:56.493705 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:04:56.498545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:04:56.499632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 13 00:04:56.502911 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825) Sep 13 00:04:56.507194 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:04:56.507230 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:04:56.507239 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:04:56.512911 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:04:56.514390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:04:56.549460 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:04:56.558447 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:04:56.566024 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:04:56.570609 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:04:56.644119 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:04:56.647998 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:04:56.649979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:04:56.653982 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:04:56.671932 ignition[938]: INFO : Ignition 2.19.0 Sep 13 00:04:56.671932 ignition[938]: INFO : Stage: mount Sep 13 00:04:56.671932 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:04:56.671932 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 13 00:04:56.671932 ignition[938]: INFO : mount: mount passed Sep 13 00:04:56.671932 ignition[938]: INFO : Ignition finished successfully Sep 13 00:04:56.672624 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:04:56.678009 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Sep 13 00:04:56.707205 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:04:57.028283 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:04:57.033013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:04:57.041601 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Sep 13 00:04:57.041634 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:57.041644 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:04:57.042585 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:04:57.045906 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:04:57.047011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:04:57.061995 ignition[966]: INFO : Ignition 2.19.0
Sep 13 00:04:57.061995 ignition[966]: INFO : Stage: files
Sep 13 00:04:57.062512 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:57.062512 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:57.062986 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:04:57.065680 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:04:57.065680 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:04:57.089608 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:04:57.089815 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:04:57.089970 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:04:57.089867 unknown[966]: wrote ssh authorized keys file for user: core
Sep 13 00:04:57.101748 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:04:57.102137 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:04:57.248931 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:04:57.544189 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:04:57.966648 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:04:58.032018 systemd-networkd[800]: ens192: Gained IPv6LL
Sep 13 00:04:58.922259 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:04:59.017712 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:04:59.021712 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:04:59.021712 ignition[966]: INFO : files: files passed
Sep 13 00:04:59.021712 ignition[966]: INFO : Ignition finished successfully
Sep 13 00:04:59.023085 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:04:59.028096 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:04:59.029026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:04:59.030871 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:04:59.030982 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:04:59.040356 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.040685 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.040685 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.041764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:04:59.042046 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:04:59.045006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:04:59.057943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:04:59.058000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:04:59.058219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:04:59.058316 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:04:59.058426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:04:59.059493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:04:59.068200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:04:59.073991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:04:59.079285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:04:59.079593 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:04:59.079759 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:04:59.079905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:04:59.079980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:04:59.080218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:04:59.080362 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:04:59.080500 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:04:59.080647 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:04:59.080799 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:04:59.082259 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:04:59.082407 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:04:59.082587 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:04:59.082755 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:04:59.082912 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:04:59.083079 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:04:59.083175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:04:59.083698 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:04:59.083952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:04:59.084118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:04:59.084165 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:04:59.084320 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:04:59.084423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:04:59.084836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:04:59.084953 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:04:59.085265 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:04:59.085425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:04:59.088922 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:04:59.089132 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:04:59.089412 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:04:59.089629 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:04:59.089689 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:04:59.089979 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:04:59.090061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:04:59.090246 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:04:59.090313 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:04:59.090568 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:04:59.090627 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:04:59.099089 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:04:59.099245 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:04:59.099356 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:04:59.102070 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:04:59.102210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:04:59.102313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:04:59.102857 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:04:59.102940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:04:59.105984 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:04:59.106148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:04:59.113743 ignition[1021]: INFO : Ignition 2.19.0
Sep 13 00:04:59.113743 ignition[1021]: INFO : Stage: umount
Sep 13 00:04:59.113743 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:59.113743 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:59.114381 ignition[1021]: INFO : umount: umount passed
Sep 13 00:04:59.114381 ignition[1021]: INFO : Ignition finished successfully
Sep 13 00:04:59.115222 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:04:59.115286 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:04:59.115703 systemd[1]: Stopped target network.target - Network.
Sep 13 00:04:59.115955 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:04:59.115987 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:04:59.116254 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:04:59.116278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:04:59.116639 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:04:59.116662 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:04:59.116914 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:04:59.116938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:04:59.117552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:04:59.117829 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:04:59.120883 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:04:59.121096 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:04:59.121562 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:04:59.121700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:04:59.128320 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:04:59.128685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:04:59.128737 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:04:59.128914 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 13 00:04:59.128939 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 13 00:04:59.129117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:04:59.129847 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:04:59.132366 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:04:59.132433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:04:59.138807 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:04:59.138881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:04:59.139540 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:04:59.139698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:04:59.140048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:04:59.140074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:04:59.140967 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:04:59.141158 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:04:59.143427 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:04:59.143530 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:04:59.144364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:04:59.144394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:04:59.144523 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:04:59.144545 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:04:59.144782 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:04:59.144813 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:04:59.145267 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:04:59.145300 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:04:59.145603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:04:59.145635 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:04:59.151029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:04:59.151137 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:04:59.151172 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:04:59.151301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:04:59.151323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:04:59.154721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:04:59.154792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:04:59.234385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:04:59.234469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:04:59.234916 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:04:59.235049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:04:59.235081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:04:59.238987 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:04:59.246413 systemd[1]: Switching root.
Sep 13 00:04:59.282138 systemd-journald[216]: Journal stopped
lint[0x1]) Sep 13 00:04:53.744219 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 13 00:04:53.744224 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 13 00:04:53.744230 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 13 00:04:53.744235 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 13 00:04:53.744241 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 13 00:04:53.744246 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 13 00:04:53.744252 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 13 00:04:53.744258 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 13 00:04:53.744264 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 13 00:04:53.744269 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 13 00:04:53.744275 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 13 00:04:53.744281 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 13 00:04:53.744286 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 13 00:04:53.744292 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 13 00:04:53.744297 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 13 00:04:53.744303 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 13 00:04:53.744308 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 13 00:04:53.744315 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 13 00:04:53.744320 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 13 00:04:53.744326 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 13 00:04:53.744331 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 13 00:04:53.744337 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 13 00:04:53.744342 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 13 00:04:53.744348 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge 
lint[0x1]) Sep 13 00:04:53.744353 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 13 00:04:53.744359 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 13 00:04:53.744364 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 13 00:04:53.744371 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 13 00:04:53.744377 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 13 00:04:53.744382 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 13 00:04:53.744388 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 13 00:04:53.744393 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 13 00:04:53.744399 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 13 00:04:53.744405 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Sep 13 00:04:53.744410 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 13 00:04:53.744416 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 13 00:04:53.744422 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 13 00:04:53.744428 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 13 00:04:53.744433 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 13 00:04:53.744439 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 13 00:04:53.744444 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 13 00:04:53.744450 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 13 00:04:53.744455 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 13 00:04:53.744461 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 13 00:04:53.744467 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 13 00:04:53.744473 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 13 00:04:53.744479 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 13 00:04:53.744485 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge 
lint[0x1]) Sep 13 00:04:53.744490 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 13 00:04:53.744496 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 13 00:04:53.744501 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 13 00:04:53.744507 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 13 00:04:53.744512 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 13 00:04:53.744518 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 13 00:04:53.744523 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 13 00:04:53.744530 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 13 00:04:53.744536 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 13 00:04:53.744541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 13 00:04:53.744547 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 13 00:04:53.744553 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 13 00:04:53.744558 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 13 00:04:53.744564 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 13 00:04:53.744569 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 13 00:04:53.744575 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 13 00:04:53.744580 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 13 00:04:53.744587 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 13 00:04:53.744592 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 13 00:04:53.744598 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 13 00:04:53.744603 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 13 00:04:53.744609 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 13 00:04:53.744614 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 13 00:04:53.744620 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge 
lint[0x1]) Sep 13 00:04:53.744625 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 13 00:04:53.744631 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 13 00:04:53.744638 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 13 00:04:53.744643 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 13 00:04:53.744649 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 13 00:04:53.744654 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 13 00:04:53.744660 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 13 00:04:53.744666 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 13 00:04:53.744671 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 13 00:04:53.744677 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 13 00:04:53.744682 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 13 00:04:53.744688 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 13 00:04:53.744694 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 13 00:04:53.744700 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 13 00:04:53.744706 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 13 00:04:53.744711 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 13 00:04:53.744717 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 13 00:04:53.744722 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 13 00:04:53.744728 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 13 00:04:53.744733 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 13 00:04:53.744739 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 13 00:04:53.744744 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 13 00:04:53.744751 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 13 00:04:53.744756 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge 
lint[0x1]) Sep 13 00:04:53.744762 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 13 00:04:53.744768 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 13 00:04:53.744773 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 13 00:04:53.744778 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 13 00:04:53.744784 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 13 00:04:53.744790 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 13 00:04:53.744795 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 13 00:04:53.744802 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 13 00:04:53.744807 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 13 00:04:53.744813 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 13 00:04:53.744818 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 13 00:04:53.744824 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 13 00:04:53.744829 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 13 00:04:53.744835 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 13 00:04:53.744841 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 13 00:04:53.744846 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 13 00:04:53.744852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 13 00:04:53.744858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:04:53.744864 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 13 00:04:53.744870 kernel: TSC deadline timer available Sep 13 00:04:53.744875 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Sep 13 00:04:53.744881 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 13 00:04:53.744887 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 13 00:04:53.744908 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 
0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:04:53.744917 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 13 00:04:53.744923 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 13 00:04:53.744931 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 13 00:04:53.744937 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 13 00:04:53.744942 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 13 00:04:53.744948 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 13 00:04:53.744953 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 13 00:04:53.744959 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 13 00:04:53.744972 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 13 00:04:53.744979 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 13 00:04:53.744985 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 13 00:04:53.744992 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 13 00:04:53.744998 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 13 00:04:53.745003 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 13 00:04:53.745009 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 13 00:04:53.745015 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 13 00:04:53.745021 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 13 00:04:53.745026 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 13 00:04:53.745032 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 13 00:04:53.745040 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin 
verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:04:53.745047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:04:53.745052 kernel: random: crng init done Sep 13 00:04:53.745058 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 13 00:04:53.745064 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 13 00:04:53.745070 kernel: printk: log_buf_len min size: 262144 bytes Sep 13 00:04:53.745076 kernel: printk: log_buf_len: 1048576 bytes Sep 13 00:04:53.745082 kernel: printk: early log buf free: 239648(91%) Sep 13 00:04:53.745089 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:04:53.745095 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:04:53.745101 kernel: Fallback order for Node 0: 0 Sep 13 00:04:53.745107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Sep 13 00:04:53.745113 kernel: Policy zone: DMA32 Sep 13 00:04:53.745119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:04:53.745125 kernel: Memory: 1936388K/2096628K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 159980K reserved, 0K cma-reserved) Sep 13 00:04:53.745132 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 13 00:04:53.745138 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:04:53.745144 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:04:53.745150 kernel: Dynamic Preempt: voluntary Sep 13 00:04:53.745157 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:04:53.745163 kernel: rcu: RCU event tracing is enabled. Sep 13 00:04:53.745169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 13 00:04:53.745175 kernel: Trampoline variant of Tasks RCU enabled. 
Sep 13 00:04:53.745182 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:04:53.745188 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:04:53.745194 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:04:53.745200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 13 00:04:53.745206 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 13 00:04:53.745212 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Sep 13 00:04:53.745218 kernel: Console: colour VGA+ 80x25
Sep 13 00:04:53.745224 kernel: printk: console [tty0] enabled
Sep 13 00:04:53.745230 kernel: printk: console [ttyS0] enabled
Sep 13 00:04:53.745237 kernel: ACPI: Core revision 20230628
Sep 13 00:04:53.745243 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 13 00:04:53.745249 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:04:53.745255 kernel: x2apic enabled
Sep 13 00:04:53.745261 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:04:53.745268 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:04:53.745274 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 13 00:04:53.745280 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 13 00:04:53.745286 kernel: Disabled fast string operations
Sep 13 00:04:53.745292 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:04:53.745299 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:04:53.748923 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:04:53.748938 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:04:53.748945 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:04:53.748951 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 00:04:53.748957 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 00:04:53.748963 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 00:04:53.748969 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:04:53.748979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:04:53.748985 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:04:53.748991 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 00:04:53.748997 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:04:53.749003 kernel: active return thunk: its_return_thunk
Sep 13 00:04:53.749010 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:04:53.749016 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:04:53.749022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:04:53.749028 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:04:53.749035 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:04:53.749041 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:04:53.749047 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:04:53.749053 kernel: pid_max: default: 131072 minimum: 1024
Sep 13 00:04:53.749059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:04:53.749065 kernel: landlock: Up and running.
Sep 13 00:04:53.749071 kernel: SELinux: Initializing.
Sep 13 00:04:53.749077 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:04:53.749084 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:04:53.749091 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 00:04:53.749097 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:04:53.749103 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:04:53.749109 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:04:53.749117 kernel: Performance Events: Skylake events, core PMU driver.
Sep 13 00:04:53.749127 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 13 00:04:53.749136 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 13 00:04:53.749142 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 13 00:04:53.749148 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 13 00:04:53.749155 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 13 00:04:53.749161 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 13 00:04:53.749167 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 13 00:04:53.749173 kernel: ... version: 1
Sep 13 00:04:53.749179 kernel: ... bit width: 48
Sep 13 00:04:53.749185 kernel: ... generic registers: 4
Sep 13 00:04:53.749191 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:04:53.749198 kernel: ... max period: 000000007fffffff
Sep 13 00:04:53.749203 kernel: ... fixed-purpose events: 0
Sep 13 00:04:53.749211 kernel: ... event mask: 000000000000000f
Sep 13 00:04:53.749217 kernel: signal: max sigframe size: 1776
Sep 13 00:04:53.749223 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:04:53.749229 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:04:53.749235 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:04:53.749241 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:04:53.749247 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:04:53.749253 kernel: .... node #0, CPUs: #1
Sep 13 00:04:53.749259 kernel: Disabled fast string operations
Sep 13 00:04:53.749266 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Sep 13 00:04:53.749272 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 13 00:04:53.749278 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:04:53.749284 kernel: smpboot: Max logical packages: 128
Sep 13 00:04:53.749290 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 13 00:04:53.749296 kernel: devtmpfs: initialized
Sep 13 00:04:53.749302 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:04:53.749308 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 13 00:04:53.749314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:04:53.749321 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 13 00:04:53.749328 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:04:53.749334 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:04:53.749340 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:04:53.749346 kernel: audit: type=2000 audit(1757721892.091:1): state=initialized audit_enabled=0 res=1
Sep 13 00:04:53.749352 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:04:53.749358 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:04:53.749364 kernel: cpuidle: using governor menu
Sep 13 00:04:53.749370 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 13 00:04:53.749377 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:04:53.749383 kernel: dca service started, version 1.12.1
Sep 13 00:04:53.749390 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Sep 13 00:04:53.749396 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:04:53.749402 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:04:53.749408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:04:53.749414 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:04:53.749420 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:04:53.749426 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:04:53.749433 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:04:53.749439 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:04:53.749445 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:04:53.749451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:04:53.749457 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 13 00:04:53.749463 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:04:53.749469 kernel: ACPI: Interpreter enabled
Sep 13 00:04:53.749475 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 13 00:04:53.749481 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:04:53.749488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:04:53.749494 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:04:53.749500 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 13 00:04:53.749506 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 13 00:04:53.749598 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:04:53.749656 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 13 00:04:53.749708 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 13 00:04:53.749719 kernel: PCI host bridge to bus 0000:00
Sep 13 00:04:53.749773 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:04:53.749837 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 13 00:04:53.749887 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:04:53.749990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:04:53.750037 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 13 00:04:53.750083 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 13 00:04:53.750150 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Sep 13 00:04:53.750207 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Sep 13 00:04:53.750263 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Sep 13 00:04:53.750319 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Sep 13 00:04:53.750371 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Sep 13 00:04:53.750422 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:04:53.750482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:04:53.750535 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:04:53.750586 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:04:53.750643 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Sep 13 00:04:53.750695 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 13 00:04:53.750746 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 13 00:04:53.750800 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Sep 13 00:04:53.750860 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Sep 13 00:04:53.753039 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Sep 13 00:04:53.753103 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Sep 13 00:04:53.753157 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Sep 13 00:04:53.753209 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Sep 13 00:04:53.753260 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Sep 13 00:04:53.753310 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Sep 13 00:04:53.753366 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:04:53.753422 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Sep 13 00:04:53.753478 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.753530 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.753591 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.753643 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.753702 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.753755 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.753812 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.753864 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.753974 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754027 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754098 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754161 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754216 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754268 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754322 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754374 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754432 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754490 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754551 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754603 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754660 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754712 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754771 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.754823 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.754879 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758196 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758271 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758332 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758395 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758456 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758512 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758564 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758620 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758671 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758732 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758785 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758840 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.758892 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.758959 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759011 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759070 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759122 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759178 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759230 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759285 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759336 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759391 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759446 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759501 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759553 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759607 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759659 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759716 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759770 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.759825 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.759876 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.762628 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.762691 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.762750 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Sep 13 00:04:53.762808 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Sep 13 00:04:53.762865 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.762935 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.762995 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Sep 13 00:04:53.763047 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.763108 kernel: pci_bus 0000:01: extended config space not accessible Sep 13 00:04:53.763168 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:04:53.763231 kernel: pci_bus 0000:02: extended config space not accessible Sep 13 00:04:53.763241 kernel: acpiphp: Slot [32] registered Sep 13 00:04:53.763248 kernel: acpiphp: Slot [33] registered Sep 13 00:04:53.763254 kernel: acpiphp: Slot [34] registered Sep 13 00:04:53.763260 kernel: acpiphp: Slot [35] registered Sep 13 00:04:53.763266 kernel: acpiphp: Slot [36] registered Sep 13 00:04:53.763272 kernel: acpiphp: Slot [37] registered Sep 13 00:04:53.763278 kernel: acpiphp: Slot [38] registered Sep 13 00:04:53.763287 kernel: acpiphp: Slot [39] registered Sep 13 00:04:53.763293 kernel: acpiphp: Slot [40] registered Sep 13 00:04:53.763298 kernel: acpiphp: Slot [41] registered Sep 13 00:04:53.763304 kernel: acpiphp: Slot [42] registered Sep 13 00:04:53.763310 kernel: acpiphp: Slot [43] registered Sep 13 00:04:53.763316 kernel: acpiphp: Slot [44] registered Sep 13 00:04:53.763322 kernel: acpiphp: Slot [45] registered Sep 13 00:04:53.763328 kernel: acpiphp: Slot [46] registered Sep 13 00:04:53.763334 kernel: acpiphp: Slot [47] registered Sep 13 00:04:53.763341 kernel: acpiphp: Slot [48] registered Sep 13 00:04:53.763347 kernel: acpiphp: Slot [49] registered Sep 13 00:04:53.763353 kernel: acpiphp: Slot [50] registered Sep 13 00:04:53.763359 kernel: acpiphp: Slot [51] registered Sep 13 00:04:53.763365 kernel: acpiphp: Slot [52] registered Sep 13 00:04:53.763371 kernel: acpiphp: Slot [53] registered Sep 13 00:04:53.763377 kernel: acpiphp: Slot [54] registered Sep 13 
00:04:53.763383 kernel: acpiphp: Slot [55] registered Sep 13 00:04:53.763389 kernel: acpiphp: Slot [56] registered Sep 13 00:04:53.763395 kernel: acpiphp: Slot [57] registered Sep 13 00:04:53.763403 kernel: acpiphp: Slot [58] registered Sep 13 00:04:53.763409 kernel: acpiphp: Slot [59] registered Sep 13 00:04:53.763415 kernel: acpiphp: Slot [60] registered Sep 13 00:04:53.763421 kernel: acpiphp: Slot [61] registered Sep 13 00:04:53.763427 kernel: acpiphp: Slot [62] registered Sep 13 00:04:53.763433 kernel: acpiphp: Slot [63] registered Sep 13 00:04:53.763487 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 13 00:04:53.763547 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 00:04:53.763600 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.763655 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.763706 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 13 00:04:53.763764 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 13 00:04:53.763814 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 13 00:04:53.763865 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 13 00:04:53.763926 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 13 00:04:53.763988 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Sep 13 00:04:53.764046 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Sep 13 00:04:53.764099 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 13 00:04:53.764151 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 00:04:53.764203 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:04:53.764255 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Sep 13 00:04:53.764309 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 00:04:53.764360 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 00:04:53.764413 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.764466 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 00:04:53.764517 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 00:04:53.764568 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.764619 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 00:04:53.764673 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 00:04:53.764735 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 00:04:53.764792 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.764848 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.768805 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:04:53.768884 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.768965 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.769020 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:04:53.769071 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.769123 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.769182 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:04:53.769233 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.769285 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.769338 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:04:53.769397 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.769456 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.769512 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:04:53.769568 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.769620 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.769679 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Sep 13 00:04:53.769734 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Sep 13 00:04:53.769787 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Sep 13 00:04:53.769842 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Sep 13 00:04:53.769910 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Sep 13 00:04:53.769972 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 13 00:04:53.770027 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 13 00:04:53.770079 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:04:53.770131 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 00:04:53.770184 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:04:53.770235 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 00:04:53.770296 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 00:04:53.770351 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:04:53.770403 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 00:04:53.770458 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.770510 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.770569 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:04:53.770622 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 00:04:53.770673 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.770727 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.770787 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:04:53.770839 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.770890 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.771076 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:04:53.771135 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.771187 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.771244 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:04:53.771300 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.771351 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.771408 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:04:53.771461 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.771511 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.771564 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 00:04:53.771616 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.771672 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.771728 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:04:53.771779 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 00:04:53.771830 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.771881 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.771957 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:04:53.772009 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 00:04:53.772060 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.772114 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.772167 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:04:53.772218 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 00:04:53.772268 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.772319 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.772373 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:04:53.772424 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.772474 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.772531 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:04:53.772583 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.772634 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.772688 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:04:53.772740 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.772791 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 00:04:53.772845 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:04:53.772946 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.773015 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.773070 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:04:53.773121 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.773177 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.773237 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:04:53.773288 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 00:04:53.773338 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.773388 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.773851 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:04:53.773919 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 00:04:53.773974 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.774026 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.774089 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:04:53.774141 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.774192 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.774251 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:04:53.774303 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.774355 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.774408 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 
00:04:53.774459 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.774510 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 00:04:53.774563 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:04:53.774622 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.774677 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.774730 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:04:53.774780 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.774830 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.774883 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:04:53.774944 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.775001 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.775010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 13 00:04:53.775017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 13 00:04:53.775026 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 13 00:04:53.775032 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:04:53.775038 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 13 00:04:53.775045 kernel: iommu: Default domain type: Translated Sep 13 00:04:53.775051 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:04:53.775061 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:04:53.775067 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:04:53.775074 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 13 00:04:53.775079 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 13 00:04:53.775136 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 13 00:04:53.775192 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Sep 13 00:04:53.775244 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:04:53.775253 kernel: vgaarb: loaded Sep 13 00:04:53.775259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 13 00:04:53.775265 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 13 00:04:53.775272 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:04:53.775278 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:04:53.775289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:04:53.775296 kernel: pnp: PnP ACPI init Sep 13 00:04:53.775354 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 13 00:04:53.775403 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 13 00:04:53.775450 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 13 00:04:53.775500 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 13 00:04:53.775551 kernel: pnp 00:06: [dma 2] Sep 13 00:04:53.775605 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 13 00:04:53.775652 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 13 00:04:53.775698 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 13 00:04:53.775706 kernel: pnp: PnP ACPI: found 8 devices Sep 13 00:04:53.775713 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:04:53.775719 kernel: NET: Registered PF_INET protocol family Sep 13 00:04:53.775725 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:04:53.775732 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 00:04:53.775740 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:04:53.775746 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:04:53.775752 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:04:53.775758 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:04:53.775764 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.775770 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:04:53.775776 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:04:53.775783 kernel: NET: Registered PF_XDP protocol family Sep 13 00:04:53.775836 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 13 00:04:53.775907 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 00:04:53.775965 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:04:53.776020 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:04:53.776076 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:04:53.776134 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 13 00:04:53.776227 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 13 00:04:53.776332 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 13 00:04:53.776388 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 13 00:04:53.776441 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 13 00:04:53.776493 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 13 00:04:53.776547 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 13 00:04:53.776603 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 13 
00:04:53.776656 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 13 00:04:53.776708 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 13 00:04:53.776761 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 13 00:04:53.776814 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 13 00:04:53.776865 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 13 00:04:53.777001 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 13 00:04:53.777069 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 13 00:04:53.777137 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 13 00:04:53.777190 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 13 00:04:53.777240 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 13 00:04:53.777291 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.777347 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.777398 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777448 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777499 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777549 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777599 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777651 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777702 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777755 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 
13 00:04:53.777806 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777857 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.777915 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.777967 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778019 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778076 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778136 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778191 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778242 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778298 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778350 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778400 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778452 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778503 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778554 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778608 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778659 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778709 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778760 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778810 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778860 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.778923 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.778977 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779030 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779081 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779131 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779183 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779234 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779285 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779335 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779386 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779439 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779490 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779540 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779590 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779641 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779691 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779741 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779791 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779842 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.779906 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.779960 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780026 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780282 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780397 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Sep 13 00:04:53.780472 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780527 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780578 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780630 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780680 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780734 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780784 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780835 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.780885 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.780953 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781004 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781055 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781109 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781160 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781213 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781264 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781314 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781365 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781415 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781465 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781515 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781566 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781616 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781666 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781720 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781770 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781820 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781870 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.781928 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.781979 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Sep 13 00:04:53.782038 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Sep 13 00:04:53.782106 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:04:53.782158 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 13 00:04:53.782213 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 13 00:04:53.782263 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.782313 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.782369 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Sep 13 00:04:53.782422 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 00:04:53.782472 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 13 00:04:53.782524 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.782576 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.782630 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 00:04:53.782682 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 13 00:04:53.782732 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.782784 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 
00:04:53.782838 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 00:04:53.782890 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 13 00:04:53.782963 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.783014 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.783064 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:04:53.783118 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.783168 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.783219 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:04:53.783305 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.783378 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.783450 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:04:53.783505 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.783555 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.783606 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:04:53.783657 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.783708 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.783759 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:04:53.783810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.783861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.784187 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Sep 13 00:04:53.784243 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:04:53.784298 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 00:04:53.784349 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Sep 13 00:04:53.784400 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.784724 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:04:53.784784 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 00:04:53.784836 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.784888 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.785064 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:04:53.785118 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 00:04:53.785173 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.785225 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.785278 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:04:53.785329 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.785380 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.785431 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:04:53.785482 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.785533 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.785583 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:04:53.785637 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.785688 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.785739 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:04:53.785790 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.785840 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.785891 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 00:04:53.786002 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.786052 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.787375 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:04:53.787435 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 00:04:53.787493 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.787545 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.787599 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:04:53.787650 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 00:04:53.787701 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.787752 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.787805 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:04:53.787856 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 00:04:53.787942 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.788014 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.788068 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:04:53.788125 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.788175 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.788226 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:04:53.788276 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.788327 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.788378 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:04:53.788429 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.788479 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 
00:04:53.788534 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:04:53.788586 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.788637 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.788687 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:04:53.788738 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.788788 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.788839 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:04:53.788890 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 00:04:53.789974 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.790034 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.790094 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:04:53.790147 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 00:04:53.790200 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.790252 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.790303 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:04:53.790355 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.790406 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.790457 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:04:53.790510 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.790564 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.790615 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 00:04:53.790665 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.790716 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Sep 13 00:04:53.790767 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:04:53.790818 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.790868 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.791023 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:04:53.791080 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.791135 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.791187 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:04:53.791237 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.791288 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.791338 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 00:04:53.791384 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 00:04:53.791429 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 00:04:53.791474 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 13 00:04:53.791517 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 13 00:04:53.791570 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 13 00:04:53.791618 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 13 00:04:53.791664 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 13 00:04:53.791710 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 13 00:04:53.791756 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 13 00:04:53.791804 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 13 00:04:53.791849 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 13 00:04:53.791913 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 13 00:04:53.791970 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 13 00:04:53.792017 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 13 00:04:53.792064 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 13 00:04:53.792114 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 13 00:04:53.792162 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 13 00:04:53.792208 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 13 00:04:53.792261 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 13 00:04:53.792308 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 13 00:04:53.792354 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:04:53.792405 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 13 00:04:53.792452 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:04:53.792503 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 13 00:04:53.792554 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:04:53.792605 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 13 00:04:53.792652 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:04:53.792702 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 13 00:04:53.792750 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 00:04:53.792804 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 13 00:04:53.792862 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:04:53.792945 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 13 00:04:53.792994 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 13 00:04:53.793041 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 13 00:04:53.793092 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Sep 13 00:04:53.793185 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 13 00:04:53.793234 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:04:53.793290 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 13 00:04:53.793339 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 13 00:04:53.793390 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:04:53.793442 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 13 00:04:53.793491 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:04:53.793543 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 13 00:04:53.793594 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:04:53.793646 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 13 00:04:53.793694 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:04:53.793745 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 13 00:04:53.793793 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:04:53.793844 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 13 00:04:53.793952 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:04:53.794006 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 13 00:04:53.794054 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 13 00:04:53.794106 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:04:53.794160 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 13 00:04:53.794208 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 13 00:04:53.794254 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:04:53.794309 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Sep 13 00:04:53.794356 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 13 00:04:53.794403 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:04:53.794453 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 13 00:04:53.794501 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:04:53.794552 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 13 00:04:53.794603 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:04:53.794655 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 13 00:04:53.794702 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 00:04:53.794754 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 13 00:04:53.794801 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:04:53.794852 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 13 00:04:53.794908 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:04:53.794965 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 13 00:04:53.795013 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 13 00:04:53.795060 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:04:53.795113 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 13 00:04:53.795161 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 13 00:04:53.795208 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:04:53.795261 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 13 00:04:53.795309 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:04:53.795360 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 13 00:04:53.795407 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Sep 13 00:04:53.795458 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 13 00:04:53.795506 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 00:04:53.795561 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 13 00:04:53.795608 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:04:53.795659 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 13 00:04:53.795707 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:04:53.795758 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 13 00:04:53.795806 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:04:53.795863 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:04:53.795872 kernel: PCI: CLS 32 bytes, default 64 Sep 13 00:04:53.795879 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:04:53.795886 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 13 00:04:53.795899 kernel: clocksource: Switched to clocksource tsc Sep 13 00:04:53.795914 kernel: Initialise system trusted keyrings Sep 13 00:04:53.795921 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 00:04:53.795927 kernel: Key type asymmetric registered Sep 13 00:04:53.795934 kernel: Asymmetric key parser 'x509' registered Sep 13 00:04:53.795942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:04:53.795949 kernel: io scheduler mq-deadline registered Sep 13 00:04:53.795955 kernel: io scheduler kyber registered Sep 13 00:04:53.795961 kernel: io scheduler bfq registered Sep 13 00:04:53.796020 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 13 00:04:53.796076 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796131 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 13 00:04:53.796183 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796237 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 13 00:04:53.796290 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796342 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 13 00:04:53.796393 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796445 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 13 00:04:53.796497 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796553 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 13 00:04:53.796604 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796656 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 13 00:04:53.796708 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796761 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 13 00:04:53.796816 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.796867 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 13 00:04:53.796947 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797000 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 13 00:04:53.797051 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797102 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 13 00:04:53.797153 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797206 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 13 00:04:53.797257 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797312 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 13 00:04:53.797363 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797414 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 13 00:04:53.797467 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797522 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 13 00:04:53.797596 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797666 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 13 00:04:53.797741 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797796 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 13 00:04:53.797852 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.797926 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 13 00:04:53.797981 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798033 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 13 00:04:53.798112 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798177 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 13 00:04:53.798230 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798293 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 13 00:04:53.798351 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798410 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 13 00:04:53.798463 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798525 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 13 00:04:53.798581 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798633 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 13 00:04:53.798684 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798737 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 13 00:04:53.798788 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.798839 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 13 00:04:53.799005 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799065 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 13 00:04:53.799117 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799168 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 13 00:04:53.799220 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799272 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 13 00:04:53.799326 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799377 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 13 00:04:53.799429 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799480 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 13 00:04:53.799531 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799585 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 13 00:04:53.799636 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 13 00:04:53.799646 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Sep 13 00:04:53.799653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:04:53.799660 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:04:53.799667 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 13 00:04:53.799674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:04:53.799682 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:04:53.799735 kernel: rtc_cmos 00:01: registered as rtc0 Sep 13 00:04:53.799785 kernel: rtc_cmos 00:01: setting system clock to 2025-09-13T00:04:53 UTC (1757721893) Sep 13 00:04:53.799832 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 13 00:04:53.799841 kernel: intel_pstate: CPU model not supported Sep 13 00:04:53.799847 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:04:53.799854 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:04:53.799860 kernel: Segment Routing with IPv6 Sep 13 00:04:53.799869 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:04:53.799876 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:04:53.799882 kernel: Key type dns_resolver registered Sep 13 00:04:53.799889 kernel: IPI shorthand broadcast: enabled Sep 13 00:04:53.801305 kernel: sched_clock: Marking stable (917003453, 240745519)->(1214698653, -56949681) Sep 13 00:04:53.801315 kernel: registered taskstats version 1 Sep 13 00:04:53.801322 kernel: Loading compiled-in X.509 certificates Sep 13 00:04:53.801329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:04:53.801338 kernel: Key type .fscrypt registered Sep 13 00:04:53.801346 kernel: Key type fscrypt-provisioning registered Sep 13 00:04:53.801352 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:04:53.801358 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:04:53.801365 kernel: ima: No architecture policies found Sep 13 00:04:53.801372 kernel: clk: Disabling unused clocks Sep 13 00:04:53.801378 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:04:53.801385 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:04:53.801391 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:04:53.801398 kernel: Run /init as init process Sep 13 00:04:53.801405 kernel: with arguments: Sep 13 00:04:53.801412 kernel: /init Sep 13 00:04:53.801418 kernel: with environment: Sep 13 00:04:53.801424 kernel: HOME=/ Sep 13 00:04:53.801431 kernel: TERM=linux Sep 13 00:04:53.801437 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:04:53.801444 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:04:53.801453 systemd[1]: Detected virtualization vmware. Sep 13 00:04:53.801461 systemd[1]: Detected architecture x86-64. Sep 13 00:04:53.801467 systemd[1]: Running in initrd. Sep 13 00:04:53.801474 systemd[1]: No hostname configured, using default hostname. Sep 13 00:04:53.801480 systemd[1]: Hostname set to . Sep 13 00:04:53.801487 systemd[1]: Initializing machine ID from random generator. Sep 13 00:04:53.801493 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:04:53.801500 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:04:53.801507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 13 00:04:53.801515 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:04:53.801522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:04:53.801528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:04:53.801535 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:04:53.801543 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:04:53.801550 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:04:53.801557 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:04:53.801564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:04:53.801571 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:04:53.801577 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:04:53.801584 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:04:53.801590 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:04:53.801597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:04:53.801604 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:04:53.801610 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:04:53.801618 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:04:53.801625 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:04:53.801631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:04:53.801638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 00:04:53.801645 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:04:53.801651 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:04:53.801658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:04:53.801665 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:04:53.801671 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:04:53.801679 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:04:53.801686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:04:53.801704 systemd-journald[216]: Collecting audit messages is disabled. Sep 13 00:04:53.801721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:04:53.801730 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:04:53.801737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:04:53.801743 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:04:53.801750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:04:53.801759 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:04:53.801765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:04:53.801772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:04:53.801780 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:04:53.801787 kernel: Bridge firewalling registered Sep 13 00:04:53.801794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:04:53.801800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:04:53.801807 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:04:53.801814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:04:53.801822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:04:53.801829 systemd-journald[216]: Journal started Sep 13 00:04:53.801843 systemd-journald[216]: Runtime Journal (/run/log/journal/5719d7ff84694b6888fd8c83d8e4a934) is 4.8M, max 38.7M, 33.8M free. Sep 13 00:04:53.759917 systemd-modules-load[217]: Inserted module 'overlay' Sep 13 00:04:53.784486 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 13 00:04:53.805529 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:04:53.815037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:04:53.815488 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:04:53.816978 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:04:53.819672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:04:53.821113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 13 00:04:53.825532 dracut-cmdline[249]: dracut-dracut-053
Sep 13 00:04:53.827668 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:04:53.840982 systemd-resolved[251]: Positive Trust Anchors:
Sep 13 00:04:53.840991 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:04:53.841013 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:04:53.843035 systemd-resolved[251]: Defaulting to hostname 'linux'.
Sep 13 00:04:53.843632 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:04:53.843992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:04:53.873912 kernel: SCSI subsystem initialized
Sep 13 00:04:53.881914 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:04:53.889913 kernel: iscsi: registered transport (tcp)
Sep 13 00:04:53.904919 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:04:53.904964 kernel: QLogic iSCSI HBA Driver
Sep 13 00:04:53.925597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:04:53.930000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:04:53.945177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:04:53.945222 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:04:53.946248 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:04:53.978917 kernel: raid6: avx2x4 gen() 49590 MB/s
Sep 13 00:04:53.994910 kernel: raid6: avx2x2 gen() 52888 MB/s
Sep 13 00:04:54.012100 kernel: raid6: avx2x1 gen() 42971 MB/s
Sep 13 00:04:54.012150 kernel: raid6: using algorithm avx2x2 gen() 52888 MB/s
Sep 13 00:04:54.030127 kernel: raid6: .... xor() 30180 MB/s, rmw enabled
Sep 13 00:04:54.030181 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:04:54.043910 kernel: xor: automatically using best checksumming function avx
Sep 13 00:04:54.148912 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:04:54.154403 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:04:54.159009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:04:54.167219 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Sep 13 00:04:54.170368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:04:54.178024 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:04:54.185515 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Sep 13 00:04:54.202122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:04:54.207992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:04:54.281487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:04:54.285010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:04:54.293015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:04:54.293495 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:04:54.294295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:04:54.294684 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:04:54.298031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:04:54.307568 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:04:54.343908 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Sep 13 00:04:54.352963 kernel: vmw_pvscsi: using 64bit dma
Sep 13 00:04:54.357045 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Sep 13 00:04:54.357074 kernel: vmw_pvscsi: max_id: 16
Sep 13 00:04:54.357083 kernel: vmw_pvscsi: setting ring_pages to 8
Sep 13 00:04:54.361922 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Sep 13 00:04:54.369529 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Sep 13 00:04:54.369644 kernel: vmw_pvscsi: enabling reqCallThreshold
Sep 13 00:04:54.369654 kernel: vmw_pvscsi: driver-based request coalescing enabled
Sep 13 00:04:54.369661 kernel: vmw_pvscsi: using MSI-X
Sep 13 00:04:54.370387 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:04:54.373055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:04:54.373130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:04:54.373459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:04:54.373581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:04:54.381455 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Sep 13 00:04:54.381553 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Sep 13 00:04:54.381628 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Sep 13 00:04:54.381695 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Sep 13 00:04:54.373651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:04:54.381358 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:04:54.382912 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:04:54.387366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:04:54.387917 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:04:54.400003 kernel: libata version 3.00 loaded.
Sep 13 00:04:54.405119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:04:54.406910 kernel: ata_piix 0000:00:07.1: version 2.13
Sep 13 00:04:54.409956 kernel: scsi host1: ata_piix
Sep 13 00:04:54.410256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:04:54.413370 kernel: scsi host2: ata_piix
Sep 13 00:04:54.413472 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Sep 13 00:04:54.413483 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Sep 13 00:04:54.421567 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:04:54.582927 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Sep 13 00:04:54.588969 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Sep 13 00:04:54.599731 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Sep 13 00:04:54.600037 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:04:54.600135 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Sep 13 00:04:54.600223 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Sep 13 00:04:54.600286 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Sep 13 00:04:54.629924 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:04:54.630920 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:04:54.643926 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Sep 13 00:04:54.644143 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:04:54.654925 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:04:54.666911 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481)
Sep 13 00:04:54.668670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Sep 13 00:04:54.669877 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (493)
Sep 13 00:04:54.672784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Sep 13 00:04:54.678834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 13 00:04:54.682615 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Sep 13 00:04:54.682783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Sep 13 00:04:54.687997 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:04:54.750913 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:04:54.756928 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:04:55.802937 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:04:55.803776 disk-uuid[590]: The operation has completed successfully.
Sep 13 00:04:55.845461 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:04:55.845537 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:04:55.848002 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:04:55.852297 sh[606]: Success
Sep 13 00:04:55.861907 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:04:55.976442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:04:55.977571 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:04:55.977891 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:04:56.030925 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:04:56.030967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:04:56.030977 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:04:56.030985 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:04:56.032326 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:04:56.039913 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:04:56.042120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:04:56.050002 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Sep 13 00:04:56.051477 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:04:56.077236 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:56.077277 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:04:56.077286 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:04:56.110917 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:04:56.120326 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:04:56.122097 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:56.129388 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:04:56.133025 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:04:56.187992 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 13 00:04:56.193998 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:04:56.261702 ignition[665]: Ignition 2.19.0
Sep 13 00:04:56.261710 ignition[665]: Stage: fetch-offline
Sep 13 00:04:56.261733 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:56.261739 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:56.261804 ignition[665]: parsed url from cmdline: ""
Sep 13 00:04:56.261807 ignition[665]: no config URL provided
Sep 13 00:04:56.261810 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:04:56.261815 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:04:56.262191 ignition[665]: config successfully fetched
Sep 13 00:04:56.262214 ignition[665]: parsing config with SHA512: c0eb510831ef02a8e5bf2e3d8264eff5a2489ae71cc3ec289809249edd76aa93ec0365019cd1ad09a86078aa2c8a9e8390d2e6b929e54decf77f356be764311d
Sep 13 00:04:56.265498 unknown[665]: fetched base config from "system"
Sep 13 00:04:56.265743 ignition[665]: fetch-offline: fetch-offline passed
Sep 13 00:04:56.265506 unknown[665]: fetched user config from "vmware"
Sep 13 00:04:56.265781 ignition[665]: Ignition finished successfully
Sep 13 00:04:56.265927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:04:56.266800 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:04:56.271020 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:04:56.284613 systemd-networkd[800]: lo: Link UP
Sep 13 00:04:56.284622 systemd-networkd[800]: lo: Gained carrier
Sep 13 00:04:56.285468 systemd-networkd[800]: Enumeration completed
Sep 13 00:04:56.285771 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:04:56.285772 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Sep 13 00:04:56.285942 systemd[1]: Reached target network.target - Network.
Sep 13 00:04:56.286028 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:04:56.290996 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 13 00:04:56.291119 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 13 00:04:56.289369 systemd-networkd[800]: ens192: Link UP
Sep 13 00:04:56.289371 systemd-networkd[800]: ens192: Gained carrier
Sep 13 00:04:56.294549 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:04:56.305345 ignition[802]: Ignition 2.19.0
Sep 13 00:04:56.305353 ignition[802]: Stage: kargs
Sep 13 00:04:56.305501 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:56.305508 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:56.307373 ignition[802]: kargs: kargs passed
Sep 13 00:04:56.307442 ignition[802]: Ignition finished successfully
Sep 13 00:04:56.308686 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:04:56.313030 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:04:56.320858 ignition[809]: Ignition 2.19.0
Sep 13 00:04:56.320865 ignition[809]: Stage: disks
Sep 13 00:04:56.320995 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:56.321002 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:56.321592 ignition[809]: disks: disks passed
Sep 13 00:04:56.321624 ignition[809]: Ignition finished successfully
Sep 13 00:04:56.322314 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:04:56.322849 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:04:56.323100 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:04:56.323315 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:04:56.323521 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:04:56.323727 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:04:56.328992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:04:56.378310 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:04:56.379573 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:04:56.385052 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:04:56.467673 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:04:56.467907 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:04:56.468189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:04:56.492007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:04:56.493382 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:04:56.493663 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:04:56.493691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:04:56.493705 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:04:56.498545 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:04:56.499632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:04:56.502911 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825)
Sep 13 00:04:56.507194 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:56.507230 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:04:56.507239 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:04:56.512911 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:04:56.514390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:04:56.549460 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:04:56.558447 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:04:56.566024 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:04:56.570609 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:04:56.644119 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:04:56.647998 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:04:56.649979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:04:56.653982 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:56.671932 ignition[938]: INFO : Ignition 2.19.0
Sep 13 00:04:56.671932 ignition[938]: INFO : Stage: mount
Sep 13 00:04:56.671932 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:56.671932 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:56.671932 ignition[938]: INFO : mount: mount passed
Sep 13 00:04:56.671932 ignition[938]: INFO : Ignition finished successfully
Sep 13 00:04:56.672624 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:04:56.678009 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:04:56.707205 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:04:57.028283 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:04:57.033013 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:04:57.041601 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Sep 13 00:04:57.041634 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:04:57.041644 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:04:57.042585 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:04:57.045906 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:04:57.047011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:04:57.061995 ignition[966]: INFO : Ignition 2.19.0
Sep 13 00:04:57.061995 ignition[966]: INFO : Stage: files
Sep 13 00:04:57.062512 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:57.062512 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:57.062986 ignition[966]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:04:57.065680 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:04:57.065680 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:04:57.089608 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:04:57.089815 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:04:57.089970 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:04:57.089867 unknown[966]: wrote ssh authorized keys file for user: core
Sep 13 00:04:57.101748 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:04:57.102137 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:04:57.248931 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:04:57.544189 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:04:57.544464 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:57.545284 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:04:57.966648 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:04:58.032018 systemd-networkd[800]: ens192: Gained IPv6LL
Sep 13 00:04:58.922259 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:04:58.922766 ignition[966]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:04:58.923711 ignition[966]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:04:59.017712 ignition[966]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:04:59.020928 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:04:59.021712 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:04:59.021712 ignition[966]: INFO : files: files passed
Sep 13 00:04:59.021712 ignition[966]: INFO : Ignition finished successfully
Sep 13 00:04:59.023085 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:04:59.028096 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:04:59.029026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:04:59.030871 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:04:59.030982 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:04:59.040356 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.040685 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.040685 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:04:59.041764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:04:59.042046 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:04:59.045006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:04:59.057943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:04:59.058000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:04:59.058219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:04:59.058316 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:04:59.058426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:04:59.059493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:04:59.068200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:04:59.073991 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:04:59.079285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:04:59.079593 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:04:59.079759 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:04:59.079905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:04:59.079980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:04:59.080218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:04:59.080362 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:04:59.080500 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:04:59.080647 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:04:59.080799 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:04:59.082259 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:04:59.082407 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:04:59.082587 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:04:59.082755 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:04:59.082912 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:04:59.083079 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:04:59.083175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:04:59.083698 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:04:59.083952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:04:59.084118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:04:59.084165 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:04:59.084320 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:04:59.084423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:04:59.084836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:04:59.084953 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:04:59.085265 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:04:59.085425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:04:59.088922 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:04:59.089132 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:04:59.089412 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:04:59.089629 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:04:59.089689 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:04:59.089979 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:04:59.090061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:04:59.090246 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:04:59.090313 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:04:59.090568 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:04:59.090627 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:04:59.099089 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:04:59.099245 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:04:59.099356 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:04:59.102070 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:04:59.102210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:04:59.102313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:04:59.102857 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:04:59.102940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:04:59.105984 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:04:59.106148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:04:59.113743 ignition[1021]: INFO : Ignition 2.19.0
Sep 13 00:04:59.113743 ignition[1021]: INFO : Stage: umount
Sep 13 00:04:59.113743 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:04:59.113743 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:04:59.114381 ignition[1021]: INFO : umount: umount passed
Sep 13 00:04:59.114381 ignition[1021]: INFO : Ignition finished successfully
Sep 13 00:04:59.115222 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:04:59.115286 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:04:59.115703 systemd[1]: Stopped target network.target - Network.
Sep 13 00:04:59.115955 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:04:59.115987 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:04:59.116254 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:04:59.116278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:04:59.116639 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:04:59.116662 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:04:59.116914 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:04:59.116938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:04:59.117552 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:04:59.117829 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:04:59.120883 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:04:59.121096 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:04:59.121562 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:04:59.121700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:04:59.128320 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:04:59.128685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:04:59.128737 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:04:59.128914 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 13 00:04:59.128939 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 13 00:04:59.129117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:04:59.129847 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:04:59.132366 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:04:59.132433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:04:59.138807 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:04:59.138881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:04:59.139540 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:04:59.139698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:04:59.140048 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:04:59.140074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:04:59.140967 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:04:59.141158 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:04:59.143427 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:04:59.143530 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:04:59.144364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:04:59.144394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:04:59.144523 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:04:59.144545 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:04:59.144782 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:04:59.144813 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:04:59.145267 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:04:59.145300 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:04:59.145603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:04:59.145635 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:04:59.151029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:04:59.151137 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:04:59.151172 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:04:59.151301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:04:59.151323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:04:59.154721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:04:59.154792 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:04:59.234385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:04:59.234469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:04:59.234916 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:04:59.235049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:04:59.235081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:04:59.238987 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:04:59.246413 systemd[1]: Switching root.
Sep 13 00:04:59.282138 systemd-journald[216]: Journal stopped
Sep 13 00:05:00.878761 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:05:00.878785 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:05:00.878795 kernel: SELinux: policy capability open_perms=1
Sep 13 00:05:00.878801 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:05:00.878806 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:05:00.878812 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:05:00.878820 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:05:00.878826 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:05:00.878832 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:05:00.878838 kernel: audit: type=1403 audit(1757721900.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:05:00.878845 systemd[1]: Successfully loaded SELinux policy in 58.579ms.
Sep 13 00:05:00.878852 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.548ms.
Sep 13 00:05:00.878859 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:05:00.878867 systemd[1]: Detected virtualization vmware.
Sep 13 00:05:00.878874 systemd[1]: Detected architecture x86-64.
Sep 13 00:05:00.878881 systemd[1]: Detected first boot.
Sep 13 00:05:00.878888 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:05:00.879674 zram_generator::config[1064]: No configuration found.
Sep 13 00:05:00.879687 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:05:00.879696 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:05:00.879704 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Sep 13 00:05:00.879711 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:05:00.879717 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:05:00.879725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:05:00.879735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:05:00.879742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:05:00.879749 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:05:00.879756 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:05:00.879763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:05:00.879770 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:05:00.879777 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:05:00.879785 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:05:00.879792 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:05:00.879799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:05:00.879806 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:05:00.879814 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:05:00.879821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:05:00.879828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:05:00.879835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:05:00.879843 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:05:00.879851 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:05:00.879860 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:05:00.879867 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:05:00.879874 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:05:00.879882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:05:00.879889 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:05:00.879907 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:05:00.879917 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:05:00.879924 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:05:00.879931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:05:00.879938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:05:00.879946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:05:00.879954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:05:00.879961 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:05:00.879968 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:05:00.879976 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:05:00.879983 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:05:00.879990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:00.879998 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:05:00.880005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:05:00.880013 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:05:00.880021 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:05:00.880029 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:05:00.880036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:05:00.880043 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Sep 13 00:05:00.880050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:05:00.880058 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:05:00.880065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:00.880073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:00.880081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:00.880088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:05:00.880095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:00.880102 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:05:00.880109 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:05:00.880117 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:05:00.880124 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:05:00.880131 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:05:00.880139 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:05:00.880148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:05:00.880156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:05:00.880177 systemd-journald[1161]: Collecting audit messages is disabled.
Sep 13 00:05:00.880196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:05:00.880204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:05:00.880211 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:05:00.880219 systemd-journald[1161]: Journal started
Sep 13 00:05:00.880234 systemd-journald[1161]: Runtime Journal (/run/log/journal/d51e1a7eff3f43be87d4348168bddc83) is 4.8M, max 38.7M, 33.8M free.
Sep 13 00:05:00.666918 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:05:00.727110 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:05:00.727352 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:05:00.880812 jq[1131]: true
Sep 13 00:05:00.883356 systemd[1]: Stopped verity-setup.service.
Sep 13 00:05:00.883371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:00.888348 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:05:00.888372 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:05:00.889033 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:05:00.889716 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:05:00.889866 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:05:00.890019 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:05:00.890308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:05:00.890919 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:05:00.891909 kernel: fuse: init (API version 7.39)
Sep 13 00:05:00.892904 kernel: loop: module loaded
Sep 13 00:05:00.896781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:05:00.897068 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:05:00.897159 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:05:00.897670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:00.897755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:00.898529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:00.898618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:00.898861 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:05:00.898954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:05:00.899181 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:00.899259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:00.900128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:05:00.900375 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:05:00.904940 jq[1173]: true
Sep 13 00:05:00.912472 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:05:00.916244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:05:00.920929 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:05:00.921061 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:05:00.921087 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:05:00.921770 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:05:00.925328 kernel: ACPI: bus type drm_connector registered
Sep 13 00:05:00.924453 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:05:00.929092 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:05:00.929545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:00.934432 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:05:00.943311 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:05:00.943488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:00.948655 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:05:00.949170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:00.962121 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:05:00.972069 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:05:00.973375 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:00.973519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:00.973826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:05:00.974083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:05:00.974259 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:05:00.974548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:05:00.986149 systemd-journald[1161]: Time spent on flushing to /var/log/journal/d51e1a7eff3f43be87d4348168bddc83 is 50.636ms for 1832 entries.
Sep 13 00:05:00.986149 systemd-journald[1161]: System Journal (/var/log/journal/d51e1a7eff3f43be87d4348168bddc83) is 8.0M, max 584.8M, 576.8M free.
Sep 13 00:05:01.054690 systemd-journald[1161]: Received client request to flush runtime journal.
Sep 13 00:05:01.003224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:05:01.008369 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:05:01.057175 kernel: loop0: detected capacity change from 0 to 142488
Sep 13 00:05:01.009012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:05:01.023765 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:05:01.058355 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:05:01.078429 ignition[1185]: Ignition 2.19.0
Sep 13 00:05:01.167224 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:05:01.078704 ignition[1185]: deleting config from guestinfo properties
Sep 13 00:05:01.173919 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:05:01.174539 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:05:01.175773 ignition[1185]: Successfully deleted config
Sep 13 00:05:01.179105 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Sep 13 00:05:01.218616 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:05:01.227955 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:05:01.239042 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:05:01.239866 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:05:01.249131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:05:01.251197 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:05:01.270631 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 13 00:05:01.270979 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:05:01.270648 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Sep 13 00:05:01.277479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:05:01.311918 kernel: loop2: detected capacity change from 0 to 2976
Sep 13 00:05:01.421913 kernel: loop3: detected capacity change from 0 to 224512
Sep 13 00:05:01.476946 kernel: loop4: detected capacity change from 0 to 142488
Sep 13 00:05:01.569065 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:05:01.594942 kernel: loop6: detected capacity change from 0 to 2976
Sep 13 00:05:01.620933 kernel: loop7: detected capacity change from 0 to 224512
Sep 13 00:05:01.706845 (sd-merge)[1232]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Sep 13 00:05:01.707284 (sd-merge)[1232]: Merged extensions into '/usr'.
Sep 13 00:05:01.712731 systemd[1]: Reloading requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:05:01.712740 systemd[1]: Reloading...
Sep 13 00:05:01.760341 zram_generator::config[1255]: No configuration found.
Sep 13 00:05:01.862087 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:05:01.878320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:01.907294 systemd[1]: Reloading finished in 194 ms.
Sep 13 00:05:01.934804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:05:01.935195 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:05:01.941983 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:05:01.943090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:05:01.945879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:05:01.960951 systemd[1]: Reloading requested from client PID 1314 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:05:01.960964 systemd[1]: Reloading...
Sep 13 00:05:01.970201 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Sep 13 00:05:01.971547 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:05:01.971772 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:05:01.972533 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:05:01.972763 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Sep 13 00:05:01.972806 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Sep 13 00:05:01.986732 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:01.986823 systemd-tmpfiles[1315]: Skipping /boot
Sep 13 00:05:01.994368 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:05:01.994617 systemd-tmpfiles[1315]: Skipping /boot
Sep 13 00:05:02.036923 zram_generator::config[1344]: No configuration found.
Sep 13 00:05:02.159909 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:05:02.165949 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:05:02.165989 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1353)
Sep 13 00:05:02.174682 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:05:02.179756 ldconfig[1195]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:05:02.200635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:05:02.255719 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:05:02.255868 systemd[1]: Reloading finished in 294 ms.
Sep 13 00:05:02.263342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:05:02.263719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:05:02.269353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:05:02.279905 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Sep 13 00:05:02.291036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:05:02.296068 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:05:02.297908 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 13 00:05:02.299026 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:05:02.303158 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:05:02.306089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:05:02.307991 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:05:02.315160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.318908 kernel: Guest personality initialized and is active
Sep 13 00:05:02.320062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:02.322539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:05:02.328595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:05:02.328782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:02.329954 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 00:05:02.329996 kernel: Initialized host personality
Sep 13 00:05:02.333927 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:05:02.334102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.334652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:02.335205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:02.338001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.346126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:05:02.346310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:02.346387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.347633 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 13 00:05:02.352137 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Sep 13 00:05:02.352061 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:05:02.355076 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:05:02.356417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.363801 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:05:02.364107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:05:02.364202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:05:02.368040 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:05:02.373050 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:05:02.373333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:05:02.373423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:05:02.376585 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:05:02.376703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:05:02.376917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:05:02.377603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:05:02.377698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:05:02.378105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:05:02.393329 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:05:02.405940 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:05:02.408226 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:05:02.408551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:05:02.412159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:05:02.418033 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:05:02.433909 augenrules[1480]: No rules
Sep 13 00:05:02.434881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:05:02.435460 (udev-worker)[1347]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 13 00:05:02.447607 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:05:02.471987 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:05:02.472514 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:05:02.476962 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:05:02.483122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:05:02.487806 systemd-networkd[1433]: lo: Link UP
Sep 13 00:05:02.487811 systemd-networkd[1433]: lo: Gained carrier
Sep 13 00:05:02.492592 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 13 00:05:02.492788 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 13 00:05:02.489965 systemd-networkd[1433]: Enumeration completed
Sep 13 00:05:02.490038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:05:02.490215 systemd-networkd[1433]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Sep 13 00:05:02.496391 systemd-networkd[1433]: ens192: Link UP
Sep 13 00:05:02.496786 systemd-networkd[1433]: ens192: Gained carrier
Sep 13 00:05:02.499029 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:05:02.508648 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:05:02.508834 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:05:02.516209 systemd-resolved[1434]: Positive Trust Anchors:
Sep 13 00:05:02.516221 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:05:02.516246 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:05:02.532218 systemd-resolved[1434]: Defaulting to hostname 'linux'.
Sep 13 00:05:02.533403 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:05:02.533558 systemd[1]: Reached target network.target - Network.
Sep 13 00:05:02.533641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:35.896550 systemd-resolved[1434]: Clock change detected. Flushing caches.
Sep 13 00:06:35.896622 systemd-timesyncd[1458]: Contacted time server 67.217.246.204:123 (0.flatcar.pool.ntp.org).
Sep 13 00:06:35.896657 systemd-timesyncd[1458]: Initial clock synchronization to Sat 2025-09-13 00:06:35.896515 UTC.
Sep 13 00:06:35.922426 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:06:35.927290 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:06:35.938194 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:35.958897 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:06:35.959473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:35.967412 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:06:35.971200 lvm[1498]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:06:35.972758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:35.973025 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:35.973370 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:06:35.973518 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:06:35.973806 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:06:35.974045 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:06:35.974173 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:06:35.974390 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:06:35.974409 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:35.974573 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:35.975563 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:06:35.976822 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:06:35.982256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:06:35.982759 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:06:35.983039 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:35.983148 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:35.983362 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:35.983377 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:06:35.984262 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:06:35.985278 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:06:35.988656 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:06:35.996316 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:06:35.998407 jq[1505]: false
Sep 13 00:06:35.996447 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:06:35.998454 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:06:36.001245 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:06:36.003285 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:06:36.008013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:06:36.009955 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:06:36.010392 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:06:36.010838 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:06:36.011370 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:06:36.012259 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:06:36.015261 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Sep 13 00:06:36.015837 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:06:36.017556 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:06:36.017902 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:06:36.020477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:06:36.020638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:06:36.035587 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:06:36.036226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:06:36.040984 jq[1514]: true
Sep 13 00:06:36.047621 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:06:36.050175 jq[1535]: true
Sep 13 00:06:36.051304 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Sep 13 00:06:36.055472 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Sep 13 00:06:36.064111 extend-filesystems[1506]: Found loop4
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found loop5
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found loop6
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found loop7
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda1
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda2
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda3
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found usr
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda4
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda6
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda7
Sep 13 00:06:36.067464 extend-filesystems[1506]: Found sda9
Sep 13 00:06:36.067464 extend-filesystems[1506]: Checking size of /dev/sda9
Sep 13 00:06:36.070540 tar[1519]: linux-amd64/LICENSE
Sep 13 00:06:36.070540 tar[1519]: linux-amd64/helm
Sep 13 00:06:36.070677 update_engine[1513]: I20250913 00:06:36.067170 1513 main.cc:92] Flatcar Update Engine starting
Sep 13 00:06:36.084315 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Sep 13 00:06:36.086310 extend-filesystems[1506]: Old size kept for /dev/sda9
Sep 13 00:06:36.086310 extend-filesystems[1506]: Found sr0
Sep 13 00:06:36.095740 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:06:36.095867 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:06:36.098048 systemd-logind[1512]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:06:36.098060 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:06:36.098171 systemd-logind[1512]: New seat seat0.
Sep 13 00:06:36.099329 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:06:36.121734 unknown[1537]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Sep 13 00:06:36.121936 bash[1565]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:06:36.123236 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:06:36.123755 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:06:36.125301 unknown[1537]: Core dump limit set to -1
Sep 13 00:06:36.130597 dbus-daemon[1504]: [system] SELinux support is enabled
Sep 13 00:06:36.132000 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:06:36.133459 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:06:36.133478 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:06:36.133632 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:06:36.133642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:06:36.142263 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 00:06:36.143330 dbus-daemon[1504]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:06:36.146256 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:06:36.148121 update_engine[1513]: I20250913 00:06:36.147517 1513 update_check_scheduler.cc:74] Next update check in 7m7s
Sep 13 00:06:36.154385 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:06:36.170425 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1359)
Sep 13 00:06:36.362652 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:06:36.471418 containerd[1531]: time="2025-09-13T00:06:36.471371167Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:06:36.514110 containerd[1531]: time="2025-09-13T00:06:36.514078239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516306 containerd[1531]: time="2025-09-13T00:06:36.516286181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516306 containerd[1531]: time="2025-09-13T00:06:36.516303752Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:06:36.516372 containerd[1531]: time="2025-09-13T00:06:36.516313270Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:06:36.516428 containerd[1531]: time="2025-09-13T00:06:36.516416706Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:06:36.516445 containerd[1531]: time="2025-09-13T00:06:36.516429472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516483 containerd[1531]: time="2025-09-13T00:06:36.516470428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516483 containerd[1531]: time="2025-09-13T00:06:36.516481302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516595 containerd[1531]: time="2025-09-13T00:06:36.516582103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516612 containerd[1531]: time="2025-09-13T00:06:36.516595493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516612 containerd[1531]: time="2025-09-13T00:06:36.516606575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516640 containerd[1531]: time="2025-09-13T00:06:36.516613209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516674 containerd[1531]: time="2025-09-13T00:06:36.516663293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516808 containerd[1531]: time="2025-09-13T00:06:36.516795982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516866 containerd[1531]: time="2025-09-13T00:06:36.516854006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:06:36.516884 containerd[1531]: time="2025-09-13T00:06:36.516864739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:06:36.516920 containerd[1531]: time="2025-09-13T00:06:36.516909810Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:06:36.516951 containerd[1531]: time="2025-09-13T00:06:36.516941839Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:06:36.526050 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:06:36.538911 containerd[1531]: time="2025-09-13T00:06:36.538878446Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:06:36.538988 containerd[1531]: time="2025-09-13T00:06:36.538925824Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:06:36.538988 containerd[1531]: time="2025-09-13T00:06:36.538937793Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:06:36.538988 containerd[1531]: time="2025-09-13T00:06:36.538949861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:06:36.538988 containerd[1531]: time="2025-09-13T00:06:36.538964824Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:06:36.539081 containerd[1531]: time="2025-09-13T00:06:36.539070063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539302309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539411975Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539424387Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539432769Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539441249Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539449824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539457279Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539466057Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539474974Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539494572Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539503374Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539510826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539534226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540776 containerd[1531]: time="2025-09-13T00:06:36.539543972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539552027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539559668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539566519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539573684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539580813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539587600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539594851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539628718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539638855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539646196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539674320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539692836Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539714594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539736816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.540989 containerd[1531]: time="2025-09-13T00:06:36.539755978Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539799267Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539814501Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539822193Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539828936Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539844232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539871350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539888256Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:06:36.541204 containerd[1531]: time="2025-09-13T00:06:36.539902418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540120799Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540158123Z" level=info msg="Connect containerd service"
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540187817Z" level=info msg="using legacy CRI server"
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540204522Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540378222Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.540865265Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.541295523Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:06:36.541334 containerd[1531]: time="2025-09-13T00:06:36.541324497Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541367933Z" level=info msg="Start subscribing containerd event"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541394576Z" level=info msg="Start recovering state"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541436127Z" level=info msg="Start event monitor"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541448527Z" level=info msg="Start snapshots syncer"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541455084Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541462124Z" level=info msg="Start streaming server"
Sep 13 00:06:36.541921 containerd[1531]: time="2025-09-13T00:06:36.541497979Z" level=info msg="containerd successfully booted in 0.072351s"
Sep 13 00:06:36.541546 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:06:36.545627 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:06:36.555466 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:06:36.560078 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:06:36.560218 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:06:36.566355 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:06:36.576392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:06:36.581428 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:06:36.582619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:06:36.582993 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:06:36.637669 tar[1519]: linux-amd64/README.md
Sep 13 00:06:36.645605 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:06:37.160359 systemd-networkd[1433]: ens192: Gained IPv6LL
Sep 13 00:06:37.161494 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:06:37.162199 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:06:37.166377 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Sep 13 00:06:37.167536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:06:37.170879 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:06:37.199409 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:06:37.200115 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 13 00:06:37.200487 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Sep 13 00:06:37.202498 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:06:38.673957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:06:38.674368 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:06:38.674678 systemd[1]: Startup finished in 1.001s (kernel) + 6.420s (initrd) + 5.351s (userspace) = 12.773s.
Sep 13 00:06:38.681233 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:38.876587 login[1647]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:06:38.877492 login[1648]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:06:38.884040 systemd-logind[1512]: New session 2 of user core. Sep 13 00:06:38.885140 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:06:38.890365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:06:38.891979 systemd-logind[1512]: New session 1 of user core. Sep 13 00:06:38.906640 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:06:38.912408 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:06:38.917190 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:39.105248 systemd[1689]: Queued start job for default target default.target. Sep 13 00:06:39.121112 systemd[1689]: Created slice app.slice - User Application Slice. Sep 13 00:06:39.121136 systemd[1689]: Reached target paths.target - Paths. Sep 13 00:06:39.121145 systemd[1689]: Reached target timers.target - Timers. Sep 13 00:06:39.121891 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:06:39.128723 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:06:39.128755 systemd[1689]: Reached target sockets.target - Sockets. Sep 13 00:06:39.128764 systemd[1689]: Reached target basic.target - Basic System. Sep 13 00:06:39.128786 systemd[1689]: Reached target default.target - Main User Target. Sep 13 00:06:39.128803 systemd[1689]: Startup finished in 208ms. Sep 13 00:06:39.128912 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 13 00:06:39.137293 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:06:39.137950 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:06:39.691772 kubelet[1682]: E0913 00:06:39.691736 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:39.692893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:39.692979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:49.943293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:49.951328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:50.026895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:50.029561 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:50.093342 kubelet[1731]: E0913 00:06:50.093278 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:50.095720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:50.095811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:00.346021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:07:00.355288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 00:07:00.674377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:00.677030 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:00.705876 kubelet[1746]: E0913 00:07:00.705839 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:00.706928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:00.707011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:06.231386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:07:06.232579 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.89.65:41008.service - OpenSSH per-connection server daemon (139.178.89.65:41008). Sep 13 00:07:06.265627 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 41008 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.266372 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.268658 systemd-logind[1512]: New session 3 of user core. Sep 13 00:07:06.279367 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:07:06.329722 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.89.65:41024.service - OpenSSH per-connection server daemon (139.178.89.65:41024). 
Sep 13 00:07:06.357173 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 41024 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.358172 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.361134 systemd-logind[1512]: New session 4 of user core. Sep 13 00:07:06.369359 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:07:06.419353 sshd[1759]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.423605 systemd[1]: sshd@1-139.178.70.103:22-139.178.89.65:41024.service: Deactivated successfully. Sep 13 00:07:06.424856 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:07:06.427226 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:07:06.428443 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.89.65:41036.service - OpenSSH per-connection server daemon (139.178.89.65:41036). Sep 13 00:07:06.429229 systemd-logind[1512]: Removed session 4. Sep 13 00:07:06.454681 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 41036 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.455481 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.458980 systemd-logind[1512]: New session 5 of user core. Sep 13 00:07:06.465265 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:07:06.511927 sshd[1766]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.527096 systemd[1]: sshd@2-139.178.70.103:22-139.178.89.65:41036.service: Deactivated successfully. Sep 13 00:07:06.528152 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:07:06.528616 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:07:06.529902 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.89.65:41046.service - OpenSSH per-connection server daemon (139.178.89.65:41046). 
Sep 13 00:07:06.531346 systemd-logind[1512]: Removed session 5. Sep 13 00:07:06.560160 sshd[1773]: Accepted publickey for core from 139.178.89.65 port 41046 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.560919 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.564123 systemd-logind[1512]: New session 6 of user core. Sep 13 00:07:06.566262 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:07:06.615454 sshd[1773]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.626707 systemd[1]: sshd@3-139.178.70.103:22-139.178.89.65:41046.service: Deactivated successfully. Sep 13 00:07:06.627512 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:07:06.628682 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:07:06.636414 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.89.65:41048.service - OpenSSH per-connection server daemon (139.178.89.65:41048). Sep 13 00:07:06.638454 systemd-logind[1512]: Removed session 6. Sep 13 00:07:06.660511 sshd[1780]: Accepted publickey for core from 139.178.89.65 port 41048 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.661195 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.663740 systemd-logind[1512]: New session 7 of user core. Sep 13 00:07:06.671267 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 00:07:06.727291 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:07:06.727513 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:06.739922 sudo[1783]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:06.741665 sshd[1780]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.751118 systemd[1]: sshd@4-139.178.70.103:22-139.178.89.65:41048.service: Deactivated successfully. Sep 13 00:07:06.752156 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:07:06.753156 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:07:06.754047 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.89.65:41060.service - OpenSSH per-connection server daemon (139.178.89.65:41060). Sep 13 00:07:06.755410 systemd-logind[1512]: Removed session 7. Sep 13 00:07:06.783117 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 41060 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.783050 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.785542 systemd-logind[1512]: New session 8 of user core. Sep 13 00:07:06.792397 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:07:06.840302 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:07:06.840466 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:06.842514 sudo[1792]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:06.845876 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:07:06.846059 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:06.855337 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:07:06.856050 auditctl[1795]: No rules Sep 13 00:07:06.856314 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:07:06.856426 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:07:06.858234 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:07:06.872955 augenrules[1813]: No rules Sep 13 00:07:06.873641 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:07:06.875060 sudo[1791]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:06.875953 sshd[1788]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:06.884604 systemd[1]: sshd@5-139.178.70.103:22-139.178.89.65:41060.service: Deactivated successfully. Sep 13 00:07:06.885380 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:07:06.886080 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:07:06.886771 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.89.65:41070.service - OpenSSH per-connection server daemon (139.178.89.65:41070). Sep 13 00:07:06.888330 systemd-logind[1512]: Removed session 8. 
Sep 13 00:07:06.913679 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 41070 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:07:06.914515 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:06.917212 systemd-logind[1512]: New session 9 of user core. Sep 13 00:07:06.938334 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:07:06.987103 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:07:06.987333 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:07:07.357537 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:07:07.357790 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:07:07.720384 dockerd[1840]: time="2025-09-13T00:07:07.720113237Z" level=info msg="Starting up" Sep 13 00:07:07.818095 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1875456517-merged.mount: Deactivated successfully. Sep 13 00:07:07.838499 dockerd[1840]: time="2025-09-13T00:07:07.838345711Z" level=info msg="Loading containers: start." Sep 13 00:07:07.960200 kernel: Initializing XFRM netlink socket Sep 13 00:07:08.061915 systemd-networkd[1433]: docker0: Link UP Sep 13 00:07:08.120033 dockerd[1840]: time="2025-09-13T00:07:08.120002879Z" level=info msg="Loading containers: done." 
Sep 13 00:07:08.135837 dockerd[1840]: time="2025-09-13T00:07:08.135805850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:07:08.135938 dockerd[1840]: time="2025-09-13T00:07:08.135885230Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:07:08.135992 dockerd[1840]: time="2025-09-13T00:07:08.135976867Z" level=info msg="Daemon has completed initialization" Sep 13 00:07:08.152322 dockerd[1840]: time="2025-09-13T00:07:08.151978674Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:07:08.152085 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:07:09.218953 containerd[1531]: time="2025-09-13T00:07:09.218923389Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:07:09.970747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621920613.mount: Deactivated successfully. Sep 13 00:07:10.957481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:07:10.965337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:11.043600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:07:11.048759 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:11.116388 kubelet[2047]: E0913 00:07:11.116356 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:11.118967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:11.119056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:07:11.439773 containerd[1531]: time="2025-09-13T00:07:11.439669511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:11.451477 containerd[1531]: time="2025-09-13T00:07:11.451436227Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 00:07:11.465567 containerd[1531]: time="2025-09-13T00:07:11.465515915Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:11.473854 containerd[1531]: time="2025-09-13T00:07:11.473545812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:11.474524 containerd[1531]: time="2025-09-13T00:07:11.474492843Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.255540693s" Sep 13 00:07:11.474583 containerd[1531]: time="2025-09-13T00:07:11.474527913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:07:11.474980 containerd[1531]: time="2025-09-13T00:07:11.474962306Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:07:13.166827 containerd[1531]: time="2025-09-13T00:07:13.166765500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:13.177563 containerd[1531]: time="2025-09-13T00:07:13.177515227Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 00:07:13.183193 containerd[1531]: time="2025-09-13T00:07:13.183163276Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:13.188249 containerd[1531]: time="2025-09-13T00:07:13.188214579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:13.188923 containerd[1531]: time="2025-09-13T00:07:13.188843940Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" 
in 1.71329878s" Sep 13 00:07:13.188923 containerd[1531]: time="2025-09-13T00:07:13.188862237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:07:13.189394 containerd[1531]: time="2025-09-13T00:07:13.189323739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:07:14.645386 containerd[1531]: time="2025-09-13T00:07:14.644546556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:14.645386 containerd[1531]: time="2025-09-13T00:07:14.645106962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 00:07:14.645386 containerd[1531]: time="2025-09-13T00:07:14.645356597Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:14.647503 containerd[1531]: time="2025-09-13T00:07:14.647482413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:14.648611 containerd[1531]: time="2025-09-13T00:07:14.648587570Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.459247676s" Sep 13 00:07:14.648653 containerd[1531]: time="2025-09-13T00:07:14.648609939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference 
\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:07:14.649158 containerd[1531]: time="2025-09-13T00:07:14.649140037Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:07:15.791546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823014099.mount: Deactivated successfully. Sep 13 00:07:16.171076 containerd[1531]: time="2025-09-13T00:07:16.170998572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.173728 containerd[1531]: time="2025-09-13T00:07:16.173699596Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 00:07:16.180778 containerd[1531]: time="2025-09-13T00:07:16.180756486Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.186208 containerd[1531]: time="2025-09-13T00:07:16.186167757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.186769 containerd[1531]: time="2025-09-13T00:07:16.186612054Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.537449957s" Sep 13 00:07:16.186769 containerd[1531]: time="2025-09-13T00:07:16.186634342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:07:16.187021 
containerd[1531]: time="2025-09-13T00:07:16.186995329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:07:16.692075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275694458.mount: Deactivated successfully. Sep 13 00:07:17.508881 containerd[1531]: time="2025-09-13T00:07:17.508840950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:17.509802 containerd[1531]: time="2025-09-13T00:07:17.509778358Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:07:17.510029 containerd[1531]: time="2025-09-13T00:07:17.510009672Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:17.512137 containerd[1531]: time="2025-09-13T00:07:17.512119804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:17.512799 containerd[1531]: time="2025-09-13T00:07:17.512778840Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.325762569s" Sep 13 00:07:17.512799 containerd[1531]: time="2025-09-13T00:07:17.512797534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:07:17.513105 containerd[1531]: time="2025-09-13T00:07:17.513086811Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:07:18.003782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392628208.mount: Deactivated successfully. Sep 13 00:07:18.006069 containerd[1531]: time="2025-09-13T00:07:18.006049272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:18.006785 containerd[1531]: time="2025-09-13T00:07:18.006749266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:07:18.006937 containerd[1531]: time="2025-09-13T00:07:18.006922705Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:18.008224 containerd[1531]: time="2025-09-13T00:07:18.008207338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:18.008997 containerd[1531]: time="2025-09-13T00:07:18.008977337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.875734ms" Sep 13 00:07:18.008997 containerd[1531]: time="2025-09-13T00:07:18.008995534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:07:18.009273 containerd[1531]: time="2025-09-13T00:07:18.009256463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:07:18.460794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125687113.mount: 
Deactivated successfully. Sep 13 00:07:20.903622 update_engine[1513]: I20250913 00:07:20.903254 1513 update_attempter.cc:509] Updating boot flags... Sep 13 00:07:20.932236 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2183) Sep 13 00:07:20.969274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2185) Sep 13 00:07:21.153367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:07:21.163562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:21.683455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:21.690435 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:07:21.859128 kubelet[2203]: E0913 00:07:21.859096 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:07:21.860993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:07:21.861081 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:07:22.387876 containerd[1531]: time="2025-09-13T00:07:22.387207487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:22.403742 containerd[1531]: time="2025-09-13T00:07:22.403701157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 00:07:22.414546 containerd[1531]: time="2025-09-13T00:07:22.414527908Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:22.428846 containerd[1531]: time="2025-09-13T00:07:22.428830696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:22.429498 containerd[1531]: time="2025-09-13T00:07:22.429478936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.420204754s" Sep 13 00:07:22.429529 containerd[1531]: time="2025-09-13T00:07:22.429501571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:07:24.527360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:24.538321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:24.556862 systemd[1]: Reloading requested from client PID 2240 ('systemctl') (unit session-9.scope)... Sep 13 00:07:24.556872 systemd[1]: Reloading... 
Sep 13 00:07:24.624204 zram_generator::config[2281]: No configuration found. Sep 13 00:07:24.670479 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 13 00:07:24.685418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:24.729150 systemd[1]: Reloading finished in 172 ms. Sep 13 00:07:24.755074 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:07:24.755124 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:07:24.755481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:24.760415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:25.075093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:25.078762 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:07:25.117365 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:25.117928 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:25.117928 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:07:25.117928 kubelet[2345]: I0913 00:07:25.117624 2345 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:07:25.735162 kubelet[2345]: I0913 00:07:25.734359 2345 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:07:25.735162 kubelet[2345]: I0913 00:07:25.734387 2345 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:07:25.735162 kubelet[2345]: I0913 00:07:25.734583 2345 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:07:25.792011 kubelet[2345]: I0913 00:07:25.791602 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:07:25.795743 kubelet[2345]: E0913 00:07:25.794925 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:25.802787 kubelet[2345]: E0913 00:07:25.802763 2345 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:07:25.802787 kubelet[2345]: I0913 00:07:25.802784 2345 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:07:25.809335 kubelet[2345]: I0913 00:07:25.809089 2345 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:07:25.811648 kubelet[2345]: I0913 00:07:25.811455 2345 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:07:25.811648 kubelet[2345]: I0913 00:07:25.811477 2345 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:07:25.813162 kubelet[2345]: I0913 00:07:25.813065 2345 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 00:07:25.813162 kubelet[2345]: I0913 00:07:25.813077 2345 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:07:25.814282 kubelet[2345]: I0913 00:07:25.814243 2345 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:25.817188 kubelet[2345]: I0913 00:07:25.817171 2345 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:07:25.817219 kubelet[2345]: I0913 00:07:25.817203 2345 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:07:25.817219 kubelet[2345]: I0913 00:07:25.817217 2345 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:07:25.817260 kubelet[2345]: I0913 00:07:25.817223 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:07:25.822283 kubelet[2345]: W0913 00:07:25.822087 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:25.822521 kubelet[2345]: E0913 00:07:25.822410 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:25.823578 kubelet[2345]: I0913 00:07:25.823525 2345 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:07:25.826104 kubelet[2345]: I0913 00:07:25.826024 2345 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:07:25.827132 kubelet[2345]: W0913 00:07:25.826859 2345 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:07:25.831450 kubelet[2345]: W0913 00:07:25.831041 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:25.831450 kubelet[2345]: E0913 00:07:25.831081 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:25.831450 kubelet[2345]: I0913 00:07:25.831282 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:07:25.831450 kubelet[2345]: I0913 00:07:25.831302 2345 server.go:1287] "Started kubelet" Sep 13 00:07:25.836196 kubelet[2345]: I0913 00:07:25.835913 2345 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:07:25.842481 kubelet[2345]: I0913 00:07:25.842464 2345 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:07:25.842985 kubelet[2345]: I0913 00:07:25.842954 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:07:25.843111 kubelet[2345]: I0913 00:07:25.843097 2345 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:07:25.847910 kubelet[2345]: I0913 00:07:25.847894 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:07:25.855062 kubelet[2345]: I0913 00:07:25.855044 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 
00:07:25.857074 kubelet[2345]: I0913 00:07:25.857066 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:07:25.857203 kubelet[2345]: E0913 00:07:25.857190 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:07:25.864840 kubelet[2345]: I0913 00:07:25.864481 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:07:25.864840 kubelet[2345]: I0913 00:07:25.864519 2345 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:07:25.865730 kubelet[2345]: E0913 00:07:25.856731 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aee34ba59971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:07:25.831289201 +0000 UTC m=+0.749595570,LastTimestamp:2025-09-13 00:07:25.831289201 +0000 UTC m=+0.749595570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:07:25.865730 kubelet[2345]: E0913 00:07:25.865274 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Sep 13 00:07:25.865730 kubelet[2345]: W0913 00:07:25.865480 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:25.865730 kubelet[2345]: E0913 00:07:25.865505 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:25.867877 kubelet[2345]: I0913 00:07:25.867867 2345 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:07:25.868128 kubelet[2345]: I0913 00:07:25.867961 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:07:25.873518 kubelet[2345]: I0913 00:07:25.873510 2345 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:07:25.887864 kubelet[2345]: E0913 00:07:25.887850 2345 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:07:25.888004 kubelet[2345]: I0913 00:07:25.887987 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:07:25.889795 kubelet[2345]: I0913 00:07:25.889786 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:07:25.889965 kubelet[2345]: I0913 00:07:25.889959 2345 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:07:25.890090 kubelet[2345]: I0913 00:07:25.890084 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:07:25.890123 kubelet[2345]: I0913 00:07:25.890119 2345 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:07:25.890201 kubelet[2345]: E0913 00:07:25.890193 2345 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:07:25.890625 kubelet[2345]: W0913 00:07:25.890615 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:25.891408 kubelet[2345]: E0913 00:07:25.891392 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:25.891408 kubelet[2345]: I0913 00:07:25.891376 2345 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:07:25.891408 kubelet[2345]: I0913 00:07:25.891407 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:07:25.891471 kubelet[2345]: I0913 00:07:25.891415 2345 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:25.901256 kubelet[2345]: I0913 00:07:25.901244 2345 policy_none.go:49] "None policy: Start" Sep 13 00:07:25.901256 kubelet[2345]: I0913 00:07:25.901256 2345 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:07:25.901329 kubelet[2345]: I0913 00:07:25.901263 2345 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:07:25.917396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 13 00:07:25.924670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:07:25.926990 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:07:25.931126 kubelet[2345]: I0913 00:07:25.930642 2345 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:07:25.931126 kubelet[2345]: I0913 00:07:25.930755 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:07:25.931126 kubelet[2345]: I0913 00:07:25.930762 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:07:25.931126 kubelet[2345]: I0913 00:07:25.931068 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:07:25.932089 kubelet[2345]: E0913 00:07:25.931975 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:07:25.932089 kubelet[2345]: E0913 00:07:25.932000 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:07:25.999049 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 13 00:07:26.011349 kubelet[2345]: E0913 00:07:26.011332 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:26.013272 systemd[1]: Created slice kubepods-burstable-pod9448acc4c6cdc979b9e7b719492b80b9.slice - libcontainer container kubepods-burstable-pod9448acc4c6cdc979b9e7b719492b80b9.slice. 
Sep 13 00:07:26.016980 kubelet[2345]: E0913 00:07:26.016911 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:26.019004 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 13 00:07:26.019905 kubelet[2345]: E0913 00:07:26.019893 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:26.031734 kubelet[2345]: I0913 00:07:26.031719 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:26.031973 kubelet[2345]: E0913 00:07:26.031954 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Sep 13 00:07:26.065801 kubelet[2345]: E0913 00:07:26.065767 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Sep 13 00:07:26.166236 kubelet[2345]: I0913 00:07:26.166136 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:26.166236 kubelet[2345]: I0913 00:07:26.166165 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:26.166236 kubelet[2345]: I0913 00:07:26.166207 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:26.166236 kubelet[2345]: I0913 00:07:26.166228 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:26.166583 kubelet[2345]: I0913 00:07:26.166245 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:26.166583 kubelet[2345]: I0913 00:07:26.166275 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:26.166583 kubelet[2345]: I0913 00:07:26.166298 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:26.166583 kubelet[2345]: I0913 00:07:26.166311 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:26.166583 kubelet[2345]: I0913 00:07:26.166321 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:26.233387 kubelet[2345]: I0913 00:07:26.233348 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:26.233788 kubelet[2345]: E0913 00:07:26.233744 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Sep 13 00:07:26.313619 containerd[1531]: time="2025-09-13T00:07:26.313202633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:26.343653 containerd[1531]: time="2025-09-13T00:07:26.343560400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9448acc4c6cdc979b9e7b719492b80b9,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:26.344238 containerd[1531]: time="2025-09-13T00:07:26.344216334Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:26.466624 kubelet[2345]: E0913 00:07:26.466592 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Sep 13 00:07:26.635090 kubelet[2345]: I0913 00:07:26.634791 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:26.635090 kubelet[2345]: E0913 00:07:26.634997 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Sep 13 00:07:26.670030 kubelet[2345]: W0913 00:07:26.669941 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:26.670030 kubelet[2345]: E0913 00:07:26.669992 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:27.040052 kubelet[2345]: W0913 00:07:27.039981 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:27.040052 kubelet[2345]: E0913 
00:07:27.040030 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:27.148747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655605451.mount: Deactivated successfully. Sep 13 00:07:27.187223 containerd[1531]: time="2025-09-13T00:07:27.186000881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:27.187223 containerd[1531]: time="2025-09-13T00:07:27.186748186Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:27.189414 containerd[1531]: time="2025-09-13T00:07:27.189205248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:07:27.189928 containerd[1531]: time="2025-09-13T00:07:27.189891015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:07:27.190875 containerd[1531]: time="2025-09-13T00:07:27.190261874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:27.193194 containerd[1531]: time="2025-09-13T00:07:27.193165378Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:27.196825 containerd[1531]: 
time="2025-09-13T00:07:27.196777332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:07:27.199115 containerd[1531]: time="2025-09-13T00:07:27.199070560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:07:27.200410 containerd[1531]: time="2025-09-13T00:07:27.199623192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 856.008274ms" Sep 13 00:07:27.200410 containerd[1531]: time="2025-09-13T00:07:27.200357532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 856.104403ms" Sep 13 00:07:27.204210 containerd[1531]: time="2025-09-13T00:07:27.204083255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 890.81737ms" Sep 13 00:07:27.224014 kubelet[2345]: W0913 00:07:27.223963 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: 
connect: connection refused Sep 13 00:07:27.224014 kubelet[2345]: E0913 00:07:27.223992 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:27.267850 kubelet[2345]: E0913 00:07:27.267820 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="1.6s" Sep 13 00:07:27.360240 kubelet[2345]: W0913 00:07:27.359436 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:27.360240 kubelet[2345]: E0913 00:07:27.359483 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:27.437130 kubelet[2345]: I0913 00:07:27.436831 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:27.437130 kubelet[2345]: E0913 00:07:27.437048 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Sep 13 00:07:27.525089 containerd[1531]: time="2025-09-13T00:07:27.524923670Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:27.525089 containerd[1531]: time="2025-09-13T00:07:27.525012916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:27.525089 containerd[1531]: time="2025-09-13T00:07:27.525032313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.525606 containerd[1531]: time="2025-09-13T00:07:27.525129543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.525606 containerd[1531]: time="2025-09-13T00:07:27.525334383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:27.525728 containerd[1531]: time="2025-09-13T00:07:27.525684246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:27.525917 containerd[1531]: time="2025-09-13T00:07:27.525889891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.526036 containerd[1531]: time="2025-09-13T00:07:27.526002935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.526694 containerd[1531]: time="2025-09-13T00:07:27.526634937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:27.526694 containerd[1531]: time="2025-09-13T00:07:27.526675173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:27.526812 containerd[1531]: time="2025-09-13T00:07:27.526691347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.526812 containerd[1531]: time="2025-09-13T00:07:27.526756537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.556280 systemd[1]: Started cri-containerd-30ca422bbd196861ef7a9c40685f71d8b858432a5f90c13601357193a760a8bb.scope - libcontainer container 30ca422bbd196861ef7a9c40685f71d8b858432a5f90c13601357193a760a8bb. Sep 13 00:07:27.557629 systemd[1]: Started cri-containerd-44e1ca1a97628150cac12e12c5cf8d40018854a0c5296b2a3178759aa6dfb55f.scope - libcontainer container 44e1ca1a97628150cac12e12c5cf8d40018854a0c5296b2a3178759aa6dfb55f. Sep 13 00:07:27.558844 systemd[1]: Started cri-containerd-b7f3ed109fcce69c705ba002f5f322bc072016b9ea42da356a27dc647876b1b4.scope - libcontainer container b7f3ed109fcce69c705ba002f5f322bc072016b9ea42da356a27dc647876b1b4. 
Sep 13 00:07:27.593173 kubelet[2345]: E0913 00:07:27.592438 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aee34ba59971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:07:25.831289201 +0000 UTC m=+0.749595570,LastTimestamp:2025-09-13 00:07:25.831289201 +0000 UTC m=+0.749595570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:07:27.606569 containerd[1531]: time="2025-09-13T00:07:27.606480635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"30ca422bbd196861ef7a9c40685f71d8b858432a5f90c13601357193a760a8bb\"" Sep 13 00:07:27.606765 containerd[1531]: time="2025-09-13T00:07:27.606687334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9448acc4c6cdc979b9e7b719492b80b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7f3ed109fcce69c705ba002f5f322bc072016b9ea42da356a27dc647876b1b4\"" Sep 13 00:07:27.608830 containerd[1531]: time="2025-09-13T00:07:27.608709272Z" level=info msg="CreateContainer within sandbox \"30ca422bbd196861ef7a9c40685f71d8b858432a5f90c13601357193a760a8bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:07:27.608943 containerd[1531]: time="2025-09-13T00:07:27.608920467Z" level=info msg="CreateContainer within sandbox 
\"b7f3ed109fcce69c705ba002f5f322bc072016b9ea42da356a27dc647876b1b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:07:27.621786 containerd[1531]: time="2025-09-13T00:07:27.621286582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"44e1ca1a97628150cac12e12c5cf8d40018854a0c5296b2a3178759aa6dfb55f\"" Sep 13 00:07:27.622968 containerd[1531]: time="2025-09-13T00:07:27.622952576Z" level=info msg="CreateContainer within sandbox \"44e1ca1a97628150cac12e12c5cf8d40018854a0c5296b2a3178759aa6dfb55f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:07:27.782576 containerd[1531]: time="2025-09-13T00:07:27.782540783Z" level=info msg="CreateContainer within sandbox \"30ca422bbd196861ef7a9c40685f71d8b858432a5f90c13601357193a760a8bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5e931254e8c8568c509ce6d6a563a0ebdb853f10a58a10a08dd626ea1f3f440\"" Sep 13 00:07:27.783419 containerd[1531]: time="2025-09-13T00:07:27.783254357Z" level=info msg="StartContainer for \"d5e931254e8c8568c509ce6d6a563a0ebdb853f10a58a10a08dd626ea1f3f440\"" Sep 13 00:07:27.785988 containerd[1531]: time="2025-09-13T00:07:27.785962800Z" level=info msg="CreateContainer within sandbox \"b7f3ed109fcce69c705ba002f5f322bc072016b9ea42da356a27dc647876b1b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04686169b0a16bc6cc854cc29494c201668ae88d1c0d2a65d7e5797769a5c967\"" Sep 13 00:07:27.787715 containerd[1531]: time="2025-09-13T00:07:27.787683894Z" level=info msg="StartContainer for \"04686169b0a16bc6cc854cc29494c201668ae88d1c0d2a65d7e5797769a5c967\"" Sep 13 00:07:27.790922 containerd[1531]: time="2025-09-13T00:07:27.790891508Z" level=info msg="CreateContainer within sandbox \"44e1ca1a97628150cac12e12c5cf8d40018854a0c5296b2a3178759aa6dfb55f\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e556259b752fab43acd7dfb0837aa96d2cc92f13135eb1080b81d06ac7c02b51\"" Sep 13 00:07:27.791347 containerd[1531]: time="2025-09-13T00:07:27.791327023Z" level=info msg="StartContainer for \"e556259b752fab43acd7dfb0837aa96d2cc92f13135eb1080b81d06ac7c02b51\"" Sep 13 00:07:27.818654 systemd[1]: Started cri-containerd-d5e931254e8c8568c509ce6d6a563a0ebdb853f10a58a10a08dd626ea1f3f440.scope - libcontainer container d5e931254e8c8568c509ce6d6a563a0ebdb853f10a58a10a08dd626ea1f3f440. Sep 13 00:07:27.827346 systemd[1]: Started cri-containerd-04686169b0a16bc6cc854cc29494c201668ae88d1c0d2a65d7e5797769a5c967.scope - libcontainer container 04686169b0a16bc6cc854cc29494c201668ae88d1c0d2a65d7e5797769a5c967. Sep 13 00:07:27.831087 systemd[1]: Started cri-containerd-e556259b752fab43acd7dfb0837aa96d2cc92f13135eb1080b81d06ac7c02b51.scope - libcontainer container e556259b752fab43acd7dfb0837aa96d2cc92f13135eb1080b81d06ac7c02b51. Sep 13 00:07:27.876017 kubelet[2345]: E0913 00:07:27.875885 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:27.891257 containerd[1531]: time="2025-09-13T00:07:27.891135733Z" level=info msg="StartContainer for \"04686169b0a16bc6cc854cc29494c201668ae88d1c0d2a65d7e5797769a5c967\" returns successfully" Sep 13 00:07:27.893094 containerd[1531]: time="2025-09-13T00:07:27.891331507Z" level=info msg="StartContainer for \"d5e931254e8c8568c509ce6d6a563a0ebdb853f10a58a10a08dd626ea1f3f440\" returns successfully" Sep 13 00:07:27.898965 containerd[1531]: time="2025-09-13T00:07:27.898879205Z" level=info msg="StartContainer for 
\"e556259b752fab43acd7dfb0837aa96d2cc92f13135eb1080b81d06ac7c02b51\" returns successfully" Sep 13 00:07:27.901706 kubelet[2345]: E0913 00:07:27.901691 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:27.903483 kubelet[2345]: E0913 00:07:27.902976 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:27.905401 kubelet[2345]: E0913 00:07:27.905388 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:28.714572 kubelet[2345]: W0913 00:07:28.714499 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Sep 13 00:07:28.714572 kubelet[2345]: E0913 00:07:28.714548 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:07:28.868647 kubelet[2345]: E0913 00:07:28.868607 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="3.2s" Sep 13 00:07:28.906418 kubelet[2345]: E0913 00:07:28.906391 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Sep 13 00:07:28.906652 kubelet[2345]: E0913 00:07:28.906537 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:28.906652 kubelet[2345]: E0913 00:07:28.906587 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:07:29.038405 kubelet[2345]: I0913 00:07:29.038160 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:30.171498 kubelet[2345]: I0913 00:07:30.171367 2345 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:07:30.171498 kubelet[2345]: E0913 00:07:30.171407 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:07:30.257721 kubelet[2345]: I0913 00:07:30.257699 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:30.276321 kubelet[2345]: E0913 00:07:30.276293 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:30.276321 kubelet[2345]: I0913 00:07:30.276313 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:30.277488 kubelet[2345]: E0913 00:07:30.277377 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:30.277488 kubelet[2345]: I0913 00:07:30.277396 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:30.281194 
kubelet[2345]: E0913 00:07:30.279233 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:30.829099 kubelet[2345]: I0913 00:07:30.829071 2345 apiserver.go:52] "Watching apiserver" Sep 13 00:07:30.864983 kubelet[2345]: I0913 00:07:30.864944 2345 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:07:31.815307 systemd[1]: Reloading requested from client PID 2618 ('systemctl') (unit session-9.scope)... Sep 13 00:07:31.815320 systemd[1]: Reloading... Sep 13 00:07:31.876204 zram_generator::config[2658]: No configuration found. Sep 13 00:07:31.940375 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 13 00:07:31.956088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:07:32.011111 systemd[1]: Reloading finished in 195 ms. Sep 13 00:07:32.035731 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:32.048824 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:07:32.049011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:07:32.051371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:07:32.566815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:07:32.575529 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:07:32.628026 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:32.628026 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:07:32.628026 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:07:32.629331 kubelet[2723]: I0913 00:07:32.629223 2723 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:07:32.632927 kubelet[2723]: I0913 00:07:32.632913 2723 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:07:32.633001 kubelet[2723]: I0913 00:07:32.632996 2723 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:07:32.633163 kubelet[2723]: I0913 00:07:32.633156 2723 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:07:32.633889 kubelet[2723]: I0913 00:07:32.633880 2723 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:07:32.637735 kubelet[2723]: I0913 00:07:32.637722 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:07:32.641267 kubelet[2723]: E0913 00:07:32.641238 2723 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:07:32.641385 kubelet[2723]: I0913 00:07:32.641378 2723 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:07:32.645265 kubelet[2723]: I0913 00:07:32.645251 2723 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:07:32.645480 kubelet[2723]: I0913 00:07:32.645460 2723 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:07:32.645626 kubelet[2723]: I0913 00:07:32.645529 2723 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:07:32.645705 kubelet[2723]: I0913 00:07:32.645698 2723 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:07:32.645742 kubelet[2723]: I0913 00:07:32.645737 2723 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:07:32.645796 kubelet[2723]: I0913 00:07:32.645791 2723 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:32.646442 kubelet[2723]: I0913 00:07:32.646434 2723 kubelet.go:446] "Attempting 
to sync node with API server" Sep 13 00:07:32.646492 kubelet[2723]: I0913 00:07:32.646487 2723 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:07:32.646531 kubelet[2723]: I0913 00:07:32.646527 2723 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:07:32.646910 kubelet[2723]: I0913 00:07:32.646902 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:07:32.648437 kubelet[2723]: I0913 00:07:32.648427 2723 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:07:32.648708 kubelet[2723]: I0913 00:07:32.648700 2723 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:07:32.651271 kubelet[2723]: I0913 00:07:32.651258 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:07:32.651633 kubelet[2723]: I0913 00:07:32.651619 2723 server.go:1287] "Started kubelet" Sep 13 00:07:32.655039 kubelet[2723]: I0913 00:07:32.655016 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:07:32.660703 kubelet[2723]: I0913 00:07:32.660678 2723 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:07:32.662887 kubelet[2723]: I0913 00:07:32.662871 2723 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:07:32.663683 kubelet[2723]: I0913 00:07:32.663646 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:07:32.663879 kubelet[2723]: I0913 00:07:32.663869 2723 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:07:32.664055 kubelet[2723]: I0913 00:07:32.664046 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:07:32.666249 kubelet[2723]: I0913 00:07:32.666239 2723 
volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:07:32.666462 kubelet[2723]: E0913 00:07:32.666452 2723 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:07:32.667612 kubelet[2723]: I0913 00:07:32.667603 2723 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:07:32.667737 kubelet[2723]: I0913 00:07:32.667730 2723 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:07:32.668876 kubelet[2723]: I0913 00:07:32.668854 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:07:32.670076 kubelet[2723]: I0913 00:07:32.670067 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:07:32.672422 kubelet[2723]: I0913 00:07:32.671734 2723 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:07:32.672422 kubelet[2723]: I0913 00:07:32.671764 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:07:32.672422 kubelet[2723]: I0913 00:07:32.671769 2723 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:07:32.672422 kubelet[2723]: E0913 00:07:32.671809 2723 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:07:32.678664 kubelet[2723]: I0913 00:07:32.677704 2723 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:07:32.678664 kubelet[2723]: I0913 00:07:32.677774 2723 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:07:32.680296 kubelet[2723]: E0913 00:07:32.680002 2723 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:07:32.681086 kubelet[2723]: I0913 00:07:32.680992 2723 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:07:32.718399 kubelet[2723]: I0913 00:07:32.718381 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:07:32.718474 kubelet[2723]: I0913 00:07:32.718408 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:07:32.718474 kubelet[2723]: I0913 00:07:32.718421 2723 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:07:32.718553 kubelet[2723]: I0913 00:07:32.718539 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:07:32.718579 kubelet[2723]: I0913 00:07:32.718551 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:07:32.718579 kubelet[2723]: I0913 00:07:32.718563 2723 policy_none.go:49] "None policy: Start" Sep 13 00:07:32.718579 kubelet[2723]: I0913 00:07:32.718569 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:07:32.718579 kubelet[2723]: I0913 00:07:32.718574 2723 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:07:32.718642 kubelet[2723]: I0913 00:07:32.718632 2723 state_mem.go:75] "Updated machine memory state" Sep 13 00:07:32.721409 kubelet[2723]: I0913 00:07:32.720824 2723 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:07:32.721409 kubelet[2723]: I0913 00:07:32.720908 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:07:32.721409 kubelet[2723]: I0913 00:07:32.720914 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:07:32.721409 kubelet[2723]: I0913 00:07:32.721172 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:07:32.721957 kubelet[2723]: E0913 00:07:32.721948 2723 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:07:32.773127 kubelet[2723]: I0913 00:07:32.773102 2723 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:32.773790 kubelet[2723]: I0913 00:07:32.773776 2723 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:32.773927 kubelet[2723]: I0913 00:07:32.773920 2723 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.822606 kubelet[2723]: I0913 00:07:32.822146 2723 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:07:32.826667 kubelet[2723]: I0913 00:07:32.826640 2723 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:07:32.826741 kubelet[2723]: I0913 00:07:32.826701 2723 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:07:32.869212 kubelet[2723]: I0913 00:07:32.869032 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.869212 kubelet[2723]: I0913 00:07:32.869059 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:32.869212 kubelet[2723]: I0913 00:07:32.869074 2723 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:32.869212 kubelet[2723]: I0913 00:07:32.869084 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.869212 kubelet[2723]: I0913 00:07:32.869093 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.869457 kubelet[2723]: I0913 00:07:32.869102 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.869457 kubelet[2723]: I0913 00:07:32.869110 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:07:32.869457 kubelet[2723]: I0913 00:07:32.869118 2723 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:07:32.869457 kubelet[2723]: I0913 00:07:32.869128 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9448acc4c6cdc979b9e7b719492b80b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9448acc4c6cdc979b9e7b719492b80b9\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:33.652159 kubelet[2723]: I0913 00:07:33.651890 2723 apiserver.go:52] "Watching apiserver" Sep 13 00:07:33.667823 kubelet[2723]: I0913 00:07:33.667776 2723 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:07:33.706037 kubelet[2723]: I0913 00:07:33.706006 2723 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:33.710080 kubelet[2723]: E0913 00:07:33.710057 2723 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:07:33.725551 kubelet[2723]: I0913 00:07:33.725510 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7254884050000001 podStartE2EDuration="1.725488405s" podCreationTimestamp="2025-09-13 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:33.722155683 +0000 UTC m=+1.119880037" watchObservedRunningTime="2025-09-13 00:07:33.725488405 +0000 UTC m=+1.123212749" Sep 13 00:07:33.729551 kubelet[2723]: I0913 00:07:33.729526 2723 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.729517875 podStartE2EDuration="1.729517875s" podCreationTimestamp="2025-09-13 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:33.725954389 +0000 UTC m=+1.123678747" watchObservedRunningTime="2025-09-13 00:07:33.729517875 +0000 UTC m=+1.127242220" Sep 13 00:07:33.729640 kubelet[2723]: I0913 00:07:33.729586 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.729582798 podStartE2EDuration="1.729582798s" podCreationTimestamp="2025-09-13 00:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:33.72943033 +0000 UTC m=+1.127154678" watchObservedRunningTime="2025-09-13 00:07:33.729582798 +0000 UTC m=+1.127307146" Sep 13 00:07:37.817445 kubelet[2723]: I0913 00:07:37.817417 2723 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:07:37.817846 kubelet[2723]: I0913 00:07:37.817835 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:07:37.817880 containerd[1531]: time="2025-09-13T00:07:37.817695359Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:07:38.534808 systemd[1]: Created slice kubepods-besteffort-podc2b7a23b_fc1b_48b8_9ab6_ca0006c722ec.slice - libcontainer container kubepods-besteffort-podc2b7a23b_fc1b_48b8_9ab6_ca0006c722ec.slice. 
Sep 13 00:07:38.604478 kubelet[2723]: I0913 00:07:38.604445 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec-xtables-lock\") pod \"kube-proxy-47qbg\" (UID: \"c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec\") " pod="kube-system/kube-proxy-47qbg" Sep 13 00:07:38.604478 kubelet[2723]: I0913 00:07:38.604480 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5999r\" (UniqueName: \"kubernetes.io/projected/c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec-kube-api-access-5999r\") pod \"kube-proxy-47qbg\" (UID: \"c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec\") " pod="kube-system/kube-proxy-47qbg" Sep 13 00:07:38.604650 kubelet[2723]: I0913 00:07:38.604501 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec-kube-proxy\") pod \"kube-proxy-47qbg\" (UID: \"c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec\") " pod="kube-system/kube-proxy-47qbg" Sep 13 00:07:38.604650 kubelet[2723]: I0913 00:07:38.604514 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec-lib-modules\") pod \"kube-proxy-47qbg\" (UID: \"c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec\") " pod="kube-system/kube-proxy-47qbg" Sep 13 00:07:38.788848 systemd[1]: Created slice kubepods-besteffort-pod6bd847b0_c035_41e5_ae03_6c0703c3cf98.slice - libcontainer container kubepods-besteffort-pod6bd847b0_c035_41e5_ae03_6c0703c3cf98.slice. 
Sep 13 00:07:38.805322 kubelet[2723]: I0913 00:07:38.805307 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bd847b0-c035-41e5-ae03-6c0703c3cf98-var-lib-calico\") pod \"tigera-operator-755d956888-lcqqv\" (UID: \"6bd847b0-c035-41e5-ae03-6c0703c3cf98\") " pod="tigera-operator/tigera-operator-755d956888-lcqqv" Sep 13 00:07:38.805412 kubelet[2723]: I0913 00:07:38.805404 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnsm\" (UniqueName: \"kubernetes.io/projected/6bd847b0-c035-41e5-ae03-6c0703c3cf98-kube-api-access-qtnsm\") pod \"tigera-operator-755d956888-lcqqv\" (UID: \"6bd847b0-c035-41e5-ae03-6c0703c3cf98\") " pod="tigera-operator/tigera-operator-755d956888-lcqqv" Sep 13 00:07:38.846986 containerd[1531]: time="2025-09-13T00:07:38.846963419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47qbg,Uid:c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:38.864934 containerd[1531]: time="2025-09-13T00:07:38.863242129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:38.864934 containerd[1531]: time="2025-09-13T00:07:38.863276697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:38.864934 containerd[1531]: time="2025-09-13T00:07:38.863295920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:38.864934 containerd[1531]: time="2025-09-13T00:07:38.863348338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:38.882280 systemd[1]: Started cri-containerd-aa14f182662624587e3fd7fabff6599f7434c192a5c9a9e567b547a064ed66c3.scope - libcontainer container aa14f182662624587e3fd7fabff6599f7434c192a5c9a9e567b547a064ed66c3. Sep 13 00:07:38.895517 containerd[1531]: time="2025-09-13T00:07:38.895485158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47qbg,Uid:c2b7a23b-fc1b-48b8-9ab6-ca0006c722ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa14f182662624587e3fd7fabff6599f7434c192a5c9a9e567b547a064ed66c3\"" Sep 13 00:07:38.897985 containerd[1531]: time="2025-09-13T00:07:38.897965592Z" level=info msg="CreateContainer within sandbox \"aa14f182662624587e3fd7fabff6599f7434c192a5c9a9e567b547a064ed66c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:07:38.904803 containerd[1531]: time="2025-09-13T00:07:38.904781051Z" level=info msg="CreateContainer within sandbox \"aa14f182662624587e3fd7fabff6599f7434c192a5c9a9e567b547a064ed66c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"55d41accda67f91767895262194f3fe5015b87688c6c00730ca93b451e39b032\"" Sep 13 00:07:38.905107 containerd[1531]: time="2025-09-13T00:07:38.905091207Z" level=info msg="StartContainer for \"55d41accda67f91767895262194f3fe5015b87688c6c00730ca93b451e39b032\"" Sep 13 00:07:38.927298 systemd[1]: Started cri-containerd-55d41accda67f91767895262194f3fe5015b87688c6c00730ca93b451e39b032.scope - libcontainer container 55d41accda67f91767895262194f3fe5015b87688c6c00730ca93b451e39b032. 
Sep 13 00:07:38.944386 containerd[1531]: time="2025-09-13T00:07:38.944303611Z" level=info msg="StartContainer for \"55d41accda67f91767895262194f3fe5015b87688c6c00730ca93b451e39b032\" returns successfully" Sep 13 00:07:39.091734 containerd[1531]: time="2025-09-13T00:07:39.091646775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lcqqv,Uid:6bd847b0-c035-41e5-ae03-6c0703c3cf98,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:07:39.103481 containerd[1531]: time="2025-09-13T00:07:39.103287568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:39.103481 containerd[1531]: time="2025-09-13T00:07:39.103324003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:39.103481 containerd[1531]: time="2025-09-13T00:07:39.103335003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:39.103669 containerd[1531]: time="2025-09-13T00:07:39.103529210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:39.115293 systemd[1]: Started cri-containerd-5878c95df4fbc2a88f1c90a281d35c629ae0c526407ebf3a500dd7c57ddbc9fb.scope - libcontainer container 5878c95df4fbc2a88f1c90a281d35c629ae0c526407ebf3a500dd7c57ddbc9fb. 
Sep 13 00:07:39.147205 containerd[1531]: time="2025-09-13T00:07:39.146808506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lcqqv,Uid:6bd847b0-c035-41e5-ae03-6c0703c3cf98,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5878c95df4fbc2a88f1c90a281d35c629ae0c526407ebf3a500dd7c57ddbc9fb\"" Sep 13 00:07:39.148456 containerd[1531]: time="2025-09-13T00:07:39.148043535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:07:39.726290 kubelet[2723]: I0913 00:07:39.726245 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-47qbg" podStartSLOduration=1.726229571 podStartE2EDuration="1.726229571s" podCreationTimestamp="2025-09-13 00:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:39.725729854 +0000 UTC m=+7.123454214" watchObservedRunningTime="2025-09-13 00:07:39.726229571 +0000 UTC m=+7.123953933" Sep 13 00:07:40.478494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047285853.mount: Deactivated successfully. 
Sep 13 00:07:41.448545 containerd[1531]: time="2025-09-13T00:07:41.448495265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.449166 containerd[1531]: time="2025-09-13T00:07:41.449131167Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:07:41.450761 containerd[1531]: time="2025-09-13T00:07:41.449683568Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.451977 containerd[1531]: time="2025-09-13T00:07:41.451241536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.451977 containerd[1531]: time="2025-09-13T00:07:41.451893783Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.303830796s" Sep 13 00:07:41.451977 containerd[1531]: time="2025-09-13T00:07:41.451914443Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:07:41.453649 containerd[1531]: time="2025-09-13T00:07:41.453630393Z" level=info msg="CreateContainer within sandbox \"5878c95df4fbc2a88f1c90a281d35c629ae0c526407ebf3a500dd7c57ddbc9fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:07:41.466825 containerd[1531]: time="2025-09-13T00:07:41.466789834Z" level=info msg="CreateContainer within sandbox 
\"5878c95df4fbc2a88f1c90a281d35c629ae0c526407ebf3a500dd7c57ddbc9fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40\"" Sep 13 00:07:41.467470 containerd[1531]: time="2025-09-13T00:07:41.467385389Z" level=info msg="StartContainer for \"d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40\"" Sep 13 00:07:41.492399 systemd[1]: run-containerd-runc-k8s.io-d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40-runc.VxwBcT.mount: Deactivated successfully. Sep 13 00:07:41.500296 systemd[1]: Started cri-containerd-d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40.scope - libcontainer container d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40. Sep 13 00:07:41.517838 containerd[1531]: time="2025-09-13T00:07:41.517811513Z" level=info msg="StartContainer for \"d5fb0dc6fa204732222821a929e007a88e8edd6569ef7f54298c392b90d3fc40\" returns successfully" Sep 13 00:07:41.728256 kubelet[2723]: I0913 00:07:41.728153 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-lcqqv" podStartSLOduration=1.4232317669999999 podStartE2EDuration="3.728137113s" podCreationTimestamp="2025-09-13 00:07:38 +0000 UTC" firstStartedPulling="2025-09-13 00:07:39.147708696 +0000 UTC m=+6.545433042" lastFinishedPulling="2025-09-13 00:07:41.452614035 +0000 UTC m=+8.850338388" observedRunningTime="2025-09-13 00:07:41.727651383 +0000 UTC m=+9.125375736" watchObservedRunningTime="2025-09-13 00:07:41.728137113 +0000 UTC m=+9.125861466" Sep 13 00:07:46.760769 sudo[1824]: pam_unix(sudo:session): session closed for user root Sep 13 00:07:46.773505 sshd[1821]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:46.775623 systemd[1]: sshd@6-139.178.70.103:22-139.178.89.65:41070.service: Deactivated successfully. Sep 13 00:07:46.776775 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 13 00:07:46.776970 systemd[1]: session-9.scope: Consumed 2.720s CPU time, 143.6M memory peak, 0B memory swap peak. Sep 13 00:07:46.777862 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:07:46.778779 systemd-logind[1512]: Removed session 9. Sep 13 00:07:49.359614 systemd[1]: Created slice kubepods-besteffort-pod9d31dd93_8b1d_4ffe_b614_6b62caba2c5f.slice - libcontainer container kubepods-besteffort-pod9d31dd93_8b1d_4ffe_b614_6b62caba2c5f.slice. Sep 13 00:07:49.383141 kubelet[2723]: I0913 00:07:49.383113 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d31dd93-8b1d-4ffe-b614-6b62caba2c5f-tigera-ca-bundle\") pod \"calico-typha-556b8dcc47-fw7wm\" (UID: \"9d31dd93-8b1d-4ffe-b614-6b62caba2c5f\") " pod="calico-system/calico-typha-556b8dcc47-fw7wm" Sep 13 00:07:49.383456 kubelet[2723]: I0913 00:07:49.383443 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d31dd93-8b1d-4ffe-b614-6b62caba2c5f-typha-certs\") pod \"calico-typha-556b8dcc47-fw7wm\" (UID: \"9d31dd93-8b1d-4ffe-b614-6b62caba2c5f\") " pod="calico-system/calico-typha-556b8dcc47-fw7wm" Sep 13 00:07:49.383545 kubelet[2723]: I0913 00:07:49.383535 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps424\" (UniqueName: \"kubernetes.io/projected/9d31dd93-8b1d-4ffe-b614-6b62caba2c5f-kube-api-access-ps424\") pod \"calico-typha-556b8dcc47-fw7wm\" (UID: \"9d31dd93-8b1d-4ffe-b614-6b62caba2c5f\") " pod="calico-system/calico-typha-556b8dcc47-fw7wm" Sep 13 00:07:49.674510 containerd[1531]: time="2025-09-13T00:07:49.674469655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556b8dcc47-fw7wm,Uid:9d31dd93-8b1d-4ffe-b614-6b62caba2c5f,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:49.699243 
systemd[1]: Created slice kubepods-besteffort-pod75f265e2_1c63_4d0f_808b_0dba0d8fbd16.slice - libcontainer container kubepods-besteffort-pod75f265e2_1c63_4d0f_808b_0dba0d8fbd16.slice. Sep 13 00:07:49.721940 containerd[1531]: time="2025-09-13T00:07:49.721749204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:49.721940 containerd[1531]: time="2025-09-13T00:07:49.721785855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:49.721940 containerd[1531]: time="2025-09-13T00:07:49.721817869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:49.721940 containerd[1531]: time="2025-09-13T00:07:49.721905248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:49.768669 systemd[1]: Started cri-containerd-5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389.scope - libcontainer container 5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389. 
Sep 13 00:07:49.786634 kubelet[2723]: I0913 00:07:49.786471 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-xtables-lock\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786634 kubelet[2723]: I0913 00:07:49.786495 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-cni-log-dir\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786634 kubelet[2723]: I0913 00:07:49.786505 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-cni-net-dir\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786634 kubelet[2723]: I0913 00:07:49.786515 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-policysync\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786634 kubelet[2723]: I0913 00:07:49.786542 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-tigera-ca-bundle\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786766 kubelet[2723]: I0913 00:07:49.786551 2723 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-var-lib-calico\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786766 kubelet[2723]: I0913 00:07:49.786559 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-var-run-calico\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786766 kubelet[2723]: I0913 00:07:49.786567 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-lib-modules\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786766 kubelet[2723]: I0913 00:07:49.786576 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-node-certs\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786766 kubelet[2723]: I0913 00:07:49.786585 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kv2\" (UniqueName: \"kubernetes.io/projected/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-kube-api-access-72kv2\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786845 kubelet[2723]: I0913 00:07:49.786598 2723 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-flexvol-driver-host\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.786845 kubelet[2723]: I0913 00:07:49.786608 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/75f265e2-1c63-4d0f-808b-0dba0d8fbd16-cni-bin-dir\") pod \"calico-node-cdhzt\" (UID: \"75f265e2-1c63-4d0f-808b-0dba0d8fbd16\") " pod="calico-system/calico-node-cdhzt" Sep 13 00:07:49.798820 containerd[1531]: time="2025-09-13T00:07:49.798795003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556b8dcc47-fw7wm,Uid:9d31dd93-8b1d-4ffe-b614-6b62caba2c5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389\"" Sep 13 00:07:49.815127 containerd[1531]: time="2025-09-13T00:07:49.814723915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:07:49.946345 kubelet[2723]: E0913 00:07:49.946264 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:49.963156 kubelet[2723]: E0913 00:07:49.963132 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.963156 kubelet[2723]: W0913 00:07:49.963150 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Sep 13 00:07:49.964419 kubelet[2723]: E0913 00:07:49.964405 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.964552 kubelet[2723]: E0913 00:07:49.964543 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.964552 kubelet[2723]: W0913 00:07:49.964551 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.964626 kubelet[2723]: E0913 00:07:49.964558 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.965054 kubelet[2723]: E0913 00:07:49.965045 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.965054 kubelet[2723]: W0913 00:07:49.965053 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.965123 kubelet[2723]: E0913 00:07:49.965062 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.965226 kubelet[2723]: E0913 00:07:49.965214 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.965226 kubelet[2723]: W0913 00:07:49.965219 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.965226 kubelet[2723]: E0913 00:07:49.965225 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.965369 kubelet[2723]: E0913 00:07:49.965329 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.965369 kubelet[2723]: W0913 00:07:49.965334 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.965369 kubelet[2723]: E0913 00:07:49.965340 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.965492 kubelet[2723]: E0913 00:07:49.965483 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.965492 kubelet[2723]: W0913 00:07:49.965488 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.965580 kubelet[2723]: E0913 00:07:49.965493 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.965619 kubelet[2723]: E0913 00:07:49.965587 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.965619 kubelet[2723]: W0913 00:07:49.965592 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.965619 kubelet[2723]: E0913 00:07:49.965597 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.966220 kubelet[2723]: E0913 00:07:49.966211 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966220 kubelet[2723]: W0913 00:07:49.966218 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966347 kubelet[2723]: E0913 00:07:49.966223 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.966347 kubelet[2723]: E0913 00:07:49.966329 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966347 kubelet[2723]: W0913 00:07:49.966333 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966347 kubelet[2723]: E0913 00:07:49.966339 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.966517 kubelet[2723]: E0913 00:07:49.966426 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966517 kubelet[2723]: W0913 00:07:49.966431 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966517 kubelet[2723]: E0913 00:07:49.966436 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.966567 kubelet[2723]: E0913 00:07:49.966525 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966567 kubelet[2723]: W0913 00:07:49.966530 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966567 kubelet[2723]: E0913 00:07:49.966534 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.966627 kubelet[2723]: E0913 00:07:49.966615 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966627 kubelet[2723]: W0913 00:07:49.966622 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966627 kubelet[2723]: E0913 00:07:49.966627 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.966850 kubelet[2723]: E0913 00:07:49.966723 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966850 kubelet[2723]: W0913 00:07:49.966728 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966850 kubelet[2723]: E0913 00:07:49.966733 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.966850 kubelet[2723]: E0913 00:07:49.966833 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.966850 kubelet[2723]: W0913 00:07:49.966837 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.966850 kubelet[2723]: E0913 00:07:49.966842 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.968237 kubelet[2723]: E0913 00:07:49.968224 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.968237 kubelet[2723]: W0913 00:07:49.968234 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.968559 kubelet[2723]: E0913 00:07:49.968244 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.968559 kubelet[2723]: E0913 00:07:49.968532 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.968559 kubelet[2723]: W0913 00:07:49.968539 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.968559 kubelet[2723]: E0913 00:07:49.968546 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.968726 kubelet[2723]: E0913 00:07:49.968673 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.968726 kubelet[2723]: W0913 00:07:49.968678 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.968726 kubelet[2723]: E0913 00:07:49.968683 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.968871 kubelet[2723]: E0913 00:07:49.968861 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.968871 kubelet[2723]: W0913 00:07:49.968867 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.968910 kubelet[2723]: E0913 00:07:49.968873 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.968996 kubelet[2723]: E0913 00:07:49.968985 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.968996 kubelet[2723]: W0913 00:07:49.968990 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.968996 kubelet[2723]: E0913 00:07:49.968995 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.969408 kubelet[2723]: E0913 00:07:49.969374 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.969408 kubelet[2723]: W0913 00:07:49.969381 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.969408 kubelet[2723]: E0913 00:07:49.969386 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.988725 kubelet[2723]: E0913 00:07:49.988613 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.988725 kubelet[2723]: W0913 00:07:49.988631 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.988725 kubelet[2723]: E0913 00:07:49.988644 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.988725 kubelet[2723]: I0913 00:07:49.988664 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42d9d358-be40-4d01-b1a6-5f1a28850352-kubelet-dir\") pod \"csi-node-driver-prqfd\" (UID: \"42d9d358-be40-4d01-b1a6-5f1a28850352\") " pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:49.988897 kubelet[2723]: E0913 00:07:49.988888 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.988938 kubelet[2723]: W0913 00:07:49.988931 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.988981 kubelet[2723]: E0913 00:07:49.988974 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.989100 kubelet[2723]: I0913 00:07:49.989032 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvc77\" (UniqueName: \"kubernetes.io/projected/42d9d358-be40-4d01-b1a6-5f1a28850352-kube-api-access-tvc77\") pod \"csi-node-driver-prqfd\" (UID: \"42d9d358-be40-4d01-b1a6-5f1a28850352\") " pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:49.989415 kubelet[2723]: E0913 00:07:49.989121 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.989415 kubelet[2723]: W0913 00:07:49.989130 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.989415 kubelet[2723]: E0913 00:07:49.989140 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.989415 kubelet[2723]: E0913 00:07:49.989287 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.989415 kubelet[2723]: W0913 00:07:49.989294 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.989415 kubelet[2723]: E0913 00:07:49.989308 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.989603 kubelet[2723]: E0913 00:07:49.989590 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.989603 kubelet[2723]: W0913 00:07:49.989600 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.989645 kubelet[2723]: E0913 00:07:49.989608 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.989645 kubelet[2723]: I0913 00:07:49.989619 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42d9d358-be40-4d01-b1a6-5f1a28850352-socket-dir\") pod \"csi-node-driver-prqfd\" (UID: \"42d9d358-be40-4d01-b1a6-5f1a28850352\") " pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:49.989744 kubelet[2723]: E0913 00:07:49.989731 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.989744 kubelet[2723]: W0913 00:07:49.989736 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.989744 kubelet[2723]: E0913 00:07:49.989741 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.989857 kubelet[2723]: I0913 00:07:49.989749 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42d9d358-be40-4d01-b1a6-5f1a28850352-registration-dir\") pod \"csi-node-driver-prqfd\" (UID: \"42d9d358-be40-4d01-b1a6-5f1a28850352\") " pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:49.990228 kubelet[2723]: E0913 00:07:49.990209 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.990228 kubelet[2723]: W0913 00:07:49.990216 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.990281 kubelet[2723]: E0913 00:07:49.990240 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.990281 kubelet[2723]: I0913 00:07:49.990253 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/42d9d358-be40-4d01-b1a6-5f1a28850352-varrun\") pod \"csi-node-driver-prqfd\" (UID: \"42d9d358-be40-4d01-b1a6-5f1a28850352\") " pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:49.990396 kubelet[2723]: E0913 00:07:49.990387 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.990396 kubelet[2723]: W0913 00:07:49.990395 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.990495 kubelet[2723]: E0913 00:07:49.990467 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.990534 kubelet[2723]: E0913 00:07:49.990513 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.990534 kubelet[2723]: W0913 00:07:49.990518 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.990591 kubelet[2723]: E0913 00:07:49.990579 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.991580 kubelet[2723]: E0913 00:07:49.991566 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.991580 kubelet[2723]: W0913 00:07:49.991578 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.991703 kubelet[2723]: E0913 00:07:49.991589 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.991750 kubelet[2723]: E0913 00:07:49.991743 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.991804 kubelet[2723]: W0913 00:07:49.991781 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.991804 kubelet[2723]: E0913 00:07:49.991795 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.991916 kubelet[2723]: E0913 00:07:49.991904 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.991916 kubelet[2723]: W0913 00:07:49.991913 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.991980 kubelet[2723]: E0913 00:07:49.991919 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.992026 kubelet[2723]: E0913 00:07:49.992014 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.992026 kubelet[2723]: W0913 00:07:49.992020 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.992026 kubelet[2723]: E0913 00:07:49.992026 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:49.992148 kubelet[2723]: E0913 00:07:49.992138 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.992148 kubelet[2723]: W0913 00:07:49.992144 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.992691 kubelet[2723]: E0913 00:07:49.992149 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:49.992691 kubelet[2723]: E0913 00:07:49.992319 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:49.992691 kubelet[2723]: W0913 00:07:49.992324 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:49.992691 kubelet[2723]: E0913 00:07:49.992328 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.002959 containerd[1531]: time="2025-09-13T00:07:50.002899761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdhzt,Uid:75f265e2-1c63-4d0f-808b-0dba0d8fbd16,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:50.035080 containerd[1531]: time="2025-09-13T00:07:50.034654085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:50.035080 containerd[1531]: time="2025-09-13T00:07:50.034689277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:50.036194 containerd[1531]: time="2025-09-13T00:07:50.035378548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:50.036194 containerd[1531]: time="2025-09-13T00:07:50.035952148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:50.049661 systemd[1]: Started cri-containerd-f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288.scope - libcontainer container f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288. Sep 13 00:07:50.091121 containerd[1531]: time="2025-09-13T00:07:50.091066895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdhzt,Uid:75f265e2-1c63-4d0f-808b-0dba0d8fbd16,Namespace:calico-system,Attempt:0,} returns sandbox id \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\"" Sep 13 00:07:50.091444 kubelet[2723]: E0913 00:07:50.091425 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.091602 kubelet[2723]: W0913 00:07:50.091527 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.091602 kubelet[2723]: E0913 00:07:50.091541 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.092642 kubelet[2723]: E0913 00:07:50.092548 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.092642 kubelet[2723]: W0913 00:07:50.092556 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.092642 kubelet[2723]: E0913 00:07:50.092567 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.093306 kubelet[2723]: E0913 00:07:50.093211 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.093306 kubelet[2723]: W0913 00:07:50.093218 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.093440 kubelet[2723]: E0913 00:07:50.093355 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.094311 kubelet[2723]: E0913 00:07:50.094270 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.094448 kubelet[2723]: W0913 00:07:50.094371 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.094448 kubelet[2723]: E0913 00:07:50.094425 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.095538 kubelet[2723]: E0913 00:07:50.095457 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.095538 kubelet[2723]: W0913 00:07:50.095465 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.095729 kubelet[2723]: E0913 00:07:50.095645 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.095967 kubelet[2723]: E0913 00:07:50.095928 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.096154 kubelet[2723]: W0913 00:07:50.096083 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.096417 kubelet[2723]: E0913 00:07:50.096408 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.096585 kubelet[2723]: E0913 00:07:50.096560 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.096927 kubelet[2723]: W0913 00:07:50.096807 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.097060 kubelet[2723]: E0913 00:07:50.096970 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.097198 kubelet[2723]: E0913 00:07:50.097104 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.097198 kubelet[2723]: W0913 00:07:50.097111 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.097539 kubelet[2723]: E0913 00:07:50.097524 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.097788 kubelet[2723]: E0913 00:07:50.097780 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.097969 kubelet[2723]: W0913 00:07:50.097819 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.098935 kubelet[2723]: E0913 00:07:50.098783 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.098935 kubelet[2723]: E0913 00:07:50.098821 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.098935 kubelet[2723]: W0913 00:07:50.098827 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.099223 kubelet[2723]: E0913 00:07:50.099214 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.100654 kubelet[2723]: E0913 00:07:50.100645 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.100994 kubelet[2723]: W0913 00:07:50.100721 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.101085 kubelet[2723]: E0913 00:07:50.101041 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.101268 kubelet[2723]: E0913 00:07:50.101235 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.101268 kubelet[2723]: W0913 00:07:50.101242 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.101617 kubelet[2723]: E0913 00:07:50.101539 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.101753 kubelet[2723]: E0913 00:07:50.101671 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.101753 kubelet[2723]: W0913 00:07:50.101678 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.101887 kubelet[2723]: E0913 00:07:50.101791 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.102329 kubelet[2723]: E0913 00:07:50.102275 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.102329 kubelet[2723]: W0913 00:07:50.102282 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.102414 kubelet[2723]: E0913 00:07:50.102355 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.102749 kubelet[2723]: E0913 00:07:50.102674 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.102749 kubelet[2723]: W0913 00:07:50.102681 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.103118 kubelet[2723]: E0913 00:07:50.102803 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.103118 kubelet[2723]: E0913 00:07:50.102874 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.103118 kubelet[2723]: W0913 00:07:50.102879 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.103260 kubelet[2723]: E0913 00:07:50.103208 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.103319 kubelet[2723]: E0913 00:07:50.103308 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.103529 kubelet[2723]: W0913 00:07:50.103462 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.103626 kubelet[2723]: E0913 00:07:50.103618 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.103865 kubelet[2723]: E0913 00:07:50.103802 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.103865 kubelet[2723]: W0913 00:07:50.103815 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.104050 kubelet[2723]: E0913 00:07:50.103976 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.104155 kubelet[2723]: E0913 00:07:50.104129 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.104155 kubelet[2723]: W0913 00:07:50.104142 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.104346 kubelet[2723]: E0913 00:07:50.104261 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.104961 kubelet[2723]: E0913 00:07:50.104953 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.105056 kubelet[2723]: W0913 00:07:50.105010 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.105113 kubelet[2723]: E0913 00:07:50.105106 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.105718 kubelet[2723]: E0913 00:07:50.105646 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.105718 kubelet[2723]: W0913 00:07:50.105654 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.105843 kubelet[2723]: E0913 00:07:50.105816 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.106027 kubelet[2723]: E0913 00:07:50.105969 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.106027 kubelet[2723]: W0913 00:07:50.105976 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.106308 kubelet[2723]: E0913 00:07:50.106208 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.108194 kubelet[2723]: E0913 00:07:50.108105 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.108194 kubelet[2723]: W0913 00:07:50.108113 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.108194 kubelet[2723]: E0913 00:07:50.108122 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.108424 kubelet[2723]: E0913 00:07:50.108356 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.108424 kubelet[2723]: W0913 00:07:50.108364 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.108424 kubelet[2723]: E0913 00:07:50.108370 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.108528 kubelet[2723]: E0913 00:07:50.108522 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.108577 kubelet[2723]: W0913 00:07:50.108557 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.108577 kubelet[2723]: E0913 00:07:50.108566 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:50.112258 kubelet[2723]: E0913 00:07:50.112223 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:50.112258 kubelet[2723]: W0913 00:07:50.112232 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:50.112258 kubelet[2723]: E0913 00:07:50.112241 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:50.497774 systemd[1]: run-containerd-runc-k8s.io-5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389-runc.W6pwo1.mount: Deactivated successfully. Sep 13 00:07:51.287764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041950341.mount: Deactivated successfully. 
Sep 13 00:07:51.687492 kubelet[2723]: E0913 00:07:51.687465 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:52.279325 containerd[1531]: time="2025-09-13T00:07:52.279298991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:52.279977 containerd[1531]: time="2025-09-13T00:07:52.279823834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:07:52.281115 containerd[1531]: time="2025-09-13T00:07:52.280171324Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:52.281456 containerd[1531]: time="2025-09-13T00:07:52.281440676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:52.281985 containerd[1531]: time="2025-09-13T00:07:52.281967079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.467214657s" Sep 13 00:07:52.282045 containerd[1531]: time="2025-09-13T00:07:52.282034019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:07:52.283665 containerd[1531]: time="2025-09-13T00:07:52.283653192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:07:52.291705 containerd[1531]: time="2025-09-13T00:07:52.291683341Z" level=info msg="CreateContainer within sandbox \"5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:07:52.333355 containerd[1531]: time="2025-09-13T00:07:52.333282075Z" level=info msg="CreateContainer within sandbox \"5882ff2233b03a683048f3b3edb05b6b12c89c0a1eca266ca39395798778c389\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69422448523851d7d0a2e4037e82c8500e7500b761f4b4c47c5054c20a5133ea\"" Sep 13 00:07:52.334281 containerd[1531]: time="2025-09-13T00:07:52.333732261Z" level=info msg="StartContainer for \"69422448523851d7d0a2e4037e82c8500e7500b761f4b4c47c5054c20a5133ea\"" Sep 13 00:07:52.384287 systemd[1]: Started cri-containerd-69422448523851d7d0a2e4037e82c8500e7500b761f4b4c47c5054c20a5133ea.scope - libcontainer container 69422448523851d7d0a2e4037e82c8500e7500b761f4b4c47c5054c20a5133ea. 
Sep 13 00:07:52.427130 containerd[1531]: time="2025-09-13T00:07:52.427104802Z" level=info msg="StartContainer for \"69422448523851d7d0a2e4037e82c8500e7500b761f4b4c47c5054c20a5133ea\" returns successfully" Sep 13 00:07:52.895519 kubelet[2723]: E0913 00:07:52.895479 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.895519 kubelet[2723]: W0913 00:07:52.895502 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.895519 kubelet[2723]: E0913 00:07:52.895524 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.895886 kubelet[2723]: E0913 00:07:52.895673 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.895886 kubelet[2723]: W0913 00:07:52.895679 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.895886 kubelet[2723]: E0913 00:07:52.895685 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.895886 kubelet[2723]: E0913 00:07:52.895780 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.895886 kubelet[2723]: W0913 00:07:52.895784 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.895886 kubelet[2723]: E0913 00:07:52.895789 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.895912 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896348 kubelet[2723]: W0913 00:07:52.895917 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.895922 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.896014 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896348 kubelet[2723]: W0913 00:07:52.896018 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.896023 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.896106 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896348 kubelet[2723]: W0913 00:07:52.896110 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.896115 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.896348 kubelet[2723]: E0913 00:07:52.896233 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896785 kubelet[2723]: W0913 00:07:52.896238 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896253 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896366 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896785 kubelet[2723]: W0913 00:07:52.896371 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896376 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896479 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896785 kubelet[2723]: W0913 00:07:52.896483 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896488 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.896785 kubelet[2723]: E0913 00:07:52.896571 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.896785 kubelet[2723]: W0913 00:07:52.896578 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896582 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896705 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.897436 kubelet[2723]: W0913 00:07:52.896712 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896719 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896828 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.897436 kubelet[2723]: W0913 00:07:52.896835 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896840 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896932 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.897436 kubelet[2723]: W0913 00:07:52.896938 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897436 kubelet[2723]: E0913 00:07:52.896945 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.897609 kubelet[2723]: E0913 00:07:52.897071 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.897609 kubelet[2723]: W0913 00:07:52.897076 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897609 kubelet[2723]: E0913 00:07:52.897081 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.897609 kubelet[2723]: E0913 00:07:52.897173 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.897609 kubelet[2723]: W0913 00:07:52.897186 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.897609 kubelet[2723]: E0913 00:07:52.897192 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.918721 kubelet[2723]: E0913 00:07:52.918691 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.918721 kubelet[2723]: W0913 00:07:52.918712 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.918721 kubelet[2723]: E0913 00:07:52.918731 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.918925 kubelet[2723]: E0913 00:07:52.918884 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.918925 kubelet[2723]: W0913 00:07:52.918892 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.918925 kubelet[2723]: E0913 00:07:52.918898 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919001 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919368 kubelet[2723]: W0913 00:07:52.919006 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919013 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919131 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919368 kubelet[2723]: W0913 00:07:52.919137 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919142 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919302 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919368 kubelet[2723]: W0913 00:07:52.919308 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919368 kubelet[2723]: E0913 00:07:52.919325 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919448 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919882 kubelet[2723]: W0913 00:07:52.919454 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919462 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919581 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919882 kubelet[2723]: W0913 00:07:52.919586 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919593 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919769 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.919882 kubelet[2723]: W0913 00:07:52.919778 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.919882 kubelet[2723]: E0913 00:07:52.919792 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.920256 kubelet[2723]: E0913 00:07:52.919939 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.920256 kubelet[2723]: W0913 00:07:52.919946 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.920256 kubelet[2723]: E0913 00:07:52.919956 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.920444 kubelet[2723]: E0913 00:07:52.920348 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.920444 kubelet[2723]: W0913 00:07:52.920354 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.920444 kubelet[2723]: E0913 00:07:52.920364 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.920673 kubelet[2723]: E0913 00:07:52.920506 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.920673 kubelet[2723]: W0913 00:07:52.920511 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.920673 kubelet[2723]: E0913 00:07:52.920527 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.920855 kubelet[2723]: E0913 00:07:52.920751 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.920855 kubelet[2723]: W0913 00:07:52.920759 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.920855 kubelet[2723]: E0913 00:07:52.920774 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.921249 kubelet[2723]: E0913 00:07:52.921027 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.921249 kubelet[2723]: W0913 00:07:52.921033 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.921249 kubelet[2723]: E0913 00:07:52.921051 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.921249 kubelet[2723]: E0913 00:07:52.921153 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.921249 kubelet[2723]: W0913 00:07:52.921159 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.921569 kubelet[2723]: E0913 00:07:52.921387 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.921569 kubelet[2723]: E0913 00:07:52.921513 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.921569 kubelet[2723]: W0913 00:07:52.921519 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.921569 kubelet[2723]: E0913 00:07:52.921524 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.921769 kubelet[2723]: E0913 00:07:52.921695 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.921769 kubelet[2723]: W0913 00:07:52.921702 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.921769 kubelet[2723]: E0913 00:07:52.921709 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:52.921870 kubelet[2723]: E0913 00:07:52.921863 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.921912 kubelet[2723]: W0913 00:07:52.921905 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.922078 kubelet[2723]: E0913 00:07:52.921945 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:52.922143 kubelet[2723]: E0913 00:07:52.922136 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:52.922195 kubelet[2723]: W0913 00:07:52.922175 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:52.922238 kubelet[2723]: E0913 00:07:52.922230 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.672000 kubelet[2723]: E0913 00:07:53.671968 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:53.809268 kubelet[2723]: I0913 00:07:53.809238 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:53.828618 containerd[1531]: time="2025-09-13T00:07:53.828592058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:53.833624 containerd[1531]: time="2025-09-13T00:07:53.833601901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:07:53.838563 containerd[1531]: time="2025-09-13T00:07:53.838545751Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:53.845853 containerd[1531]: 
time="2025-09-13T00:07:53.845827096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:53.846413 containerd[1531]: time="2025-09-13T00:07:53.846131505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.562402099s" Sep 13 00:07:53.846413 containerd[1531]: time="2025-09-13T00:07:53.846150214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:07:53.847841 containerd[1531]: time="2025-09-13T00:07:53.847819969Z" level=info msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:07:53.860574 containerd[1531]: time="2025-09-13T00:07:53.860537970Z" level=info msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11\"" Sep 13 00:07:53.861306 containerd[1531]: time="2025-09-13T00:07:53.861288512Z" level=info msg="StartContainer for \"e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11\"" Sep 13 00:07:53.886265 systemd[1]: Started cri-containerd-e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11.scope - libcontainer container 
e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11. Sep 13 00:07:53.902932 kubelet[2723]: E0913 00:07:53.902611 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.902932 kubelet[2723]: W0913 00:07:53.902791 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.902932 kubelet[2723]: E0913 00:07:53.902805 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.903871 kubelet[2723]: E0913 00:07:53.903495 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.903871 kubelet[2723]: W0913 00:07:53.903502 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.903871 kubelet[2723]: E0913 00:07:53.903509 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.904382 kubelet[2723]: E0913 00:07:53.904374 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.904784 kubelet[2723]: W0913 00:07:53.904394 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.904784 kubelet[2723]: E0913 00:07:53.904666 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.905720 kubelet[2723]: E0913 00:07:53.905162 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.905720 kubelet[2723]: W0913 00:07:53.905175 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.905720 kubelet[2723]: E0913 00:07:53.905189 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.906123 kubelet[2723]: E0913 00:07:53.906072 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.906123 kubelet[2723]: W0913 00:07:53.906080 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.906123 kubelet[2723]: E0913 00:07:53.906088 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.906504 containerd[1531]: time="2025-09-13T00:07:53.906444506Z" level=info msg="StartContainer for \"e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11\" returns successfully" Sep 13 00:07:53.906868 kubelet[2723]: E0913 00:07:53.906677 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.906868 kubelet[2723]: W0913 00:07:53.906685 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.906868 kubelet[2723]: E0913 00:07:53.906692 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.907118 kubelet[2723]: E0913 00:07:53.907060 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.907118 kubelet[2723]: W0913 00:07:53.907070 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.907118 kubelet[2723]: E0913 00:07:53.907079 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.907371 kubelet[2723]: E0913 00:07:53.907336 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.907371 kubelet[2723]: W0913 00:07:53.907343 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.907371 kubelet[2723]: E0913 00:07:53.907350 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.907939 kubelet[2723]: E0913 00:07:53.907610 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.907939 kubelet[2723]: W0913 00:07:53.907617 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.907939 kubelet[2723]: E0913 00:07:53.907623 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.907939 kubelet[2723]: E0913 00:07:53.907889 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.907939 kubelet[2723]: W0913 00:07:53.907895 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.907939 kubelet[2723]: E0913 00:07:53.907901 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.908253 kubelet[2723]: E0913 00:07:53.908124 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.908253 kubelet[2723]: W0913 00:07:53.908130 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.908253 kubelet[2723]: E0913 00:07:53.908136 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.908452 kubelet[2723]: E0913 00:07:53.908375 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.908452 kubelet[2723]: W0913 00:07:53.908384 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.908452 kubelet[2723]: E0913 00:07:53.908392 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.908739 kubelet[2723]: E0913 00:07:53.908653 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.908739 kubelet[2723]: W0913 00:07:53.908664 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.908739 kubelet[2723]: E0913 00:07:53.908674 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.909002 kubelet[2723]: E0913 00:07:53.908941 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.909002 kubelet[2723]: W0913 00:07:53.908948 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.909002 kubelet[2723]: E0913 00:07:53.908957 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:53.909221 kubelet[2723]: E0913 00:07:53.909120 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:53.909221 kubelet[2723]: W0913 00:07:53.909127 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:53.909221 kubelet[2723]: E0913 00:07:53.909132 2723 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:53.918234 systemd[1]: cri-containerd-e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11.scope: Deactivated successfully. Sep 13 00:07:53.933785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:54.181112 containerd[1531]: time="2025-09-13T00:07:54.168211895Z" level=info msg="shim disconnected" id=e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11 namespace=k8s.io Sep 13 00:07:54.181112 containerd[1531]: time="2025-09-13T00:07:54.181006506Z" level=warning msg="cleaning up after shim disconnected" id=e87777023920f3b7320f7183f8d7cf39cff37dbb08f6e73b39ed1a7abb2b7a11 namespace=k8s.io Sep 13 00:07:54.181112 containerd[1531]: time="2025-09-13T00:07:54.181015067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:54.812895 containerd[1531]: time="2025-09-13T00:07:54.812315839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:07:54.823002 kubelet[2723]: I0913 00:07:54.822966 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-556b8dcc47-fw7wm" podStartSLOduration=3.353557653 podStartE2EDuration="5.822950556s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:07:49.813368268 +0000 UTC m=+17.211092614" lastFinishedPulling="2025-09-13 00:07:52.28276117 +0000 UTC m=+19.680485517" observedRunningTime="2025-09-13 00:07:52.8186291 +0000 UTC m=+20.216353453" watchObservedRunningTime="2025-09-13 00:07:54.822950556 +0000 UTC m=+22.220674920" Sep 13 00:07:55.672872 kubelet[2723]: E0913 00:07:55.672826 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:57.355211 containerd[1531]: time="2025-09-13T00:07:57.355149585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:57.356020 containerd[1531]: time="2025-09-13T00:07:57.355656655Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:07:57.356020 containerd[1531]: time="2025-09-13T00:07:57.355952870Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:57.356996 containerd[1531]: time="2025-09-13T00:07:57.356981387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:57.357596 containerd[1531]: time="2025-09-13T00:07:57.357464699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.545126681s" Sep 13 00:07:57.357596 containerd[1531]: time="2025-09-13T00:07:57.357480750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:07:57.359366 containerd[1531]: time="2025-09-13T00:07:57.359326817Z" level=info msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:07:57.375912 containerd[1531]: time="2025-09-13T00:07:57.375889640Z" level=info msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70\"" Sep 13 00:07:57.377884 containerd[1531]: 
time="2025-09-13T00:07:57.376826023Z" level=info msg="StartContainer for \"cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70\"" Sep 13 00:07:57.394232 systemd[1]: run-containerd-runc-k8s.io-cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70-runc.WsjyYe.mount: Deactivated successfully. Sep 13 00:07:57.402264 systemd[1]: Started cri-containerd-cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70.scope - libcontainer container cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70. Sep 13 00:07:57.423342 containerd[1531]: time="2025-09-13T00:07:57.423322890Z" level=info msg="StartContainer for \"cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70\" returns successfully" Sep 13 00:07:57.672280 kubelet[2723]: E0913 00:07:57.672244 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:58.785633 systemd[1]: cri-containerd-cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70.scope: Deactivated successfully. Sep 13 00:07:58.821632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:58.826217 containerd[1531]: time="2025-09-13T00:07:58.826155008Z" level=info msg="shim disconnected" id=cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70 namespace=k8s.io Sep 13 00:07:58.826217 containerd[1531]: time="2025-09-13T00:07:58.826215176Z" level=warning msg="cleaning up after shim disconnected" id=cbe2cbbfcda19816a70b657cd5237621a0600b665eb691b9420ee3138064cb70 namespace=k8s.io Sep 13 00:07:58.826494 containerd[1531]: time="2025-09-13T00:07:58.826221383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:58.832724 kubelet[2723]: I0913 00:07:58.832383 2723 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:07:58.873306 systemd[1]: Created slice kubepods-burstable-pod971b88ba_d1d5_453d_978e_99740f9cff86.slice - libcontainer container kubepods-burstable-pod971b88ba_d1d5_453d_978e_99740f9cff86.slice. Sep 13 00:07:58.885906 systemd[1]: Created slice kubepods-burstable-podc55cbdbc_12fb_409d_96af_7aa0cff6b9cc.slice - libcontainer container kubepods-burstable-podc55cbdbc_12fb_409d_96af_7aa0cff6b9cc.slice. Sep 13 00:07:58.890033 systemd[1]: Created slice kubepods-besteffort-podcd1bab13_6a3e_4841_8b06_97978d7ef332.slice - libcontainer container kubepods-besteffort-podcd1bab13_6a3e_4841_8b06_97978d7ef332.slice. Sep 13 00:07:58.893907 systemd[1]: Created slice kubepods-besteffort-pod91c757d2_79fb_408e_8fa6_48713c1e092e.slice - libcontainer container kubepods-besteffort-pod91c757d2_79fb_408e_8fa6_48713c1e092e.slice. Sep 13 00:07:58.899653 systemd[1]: Created slice kubepods-besteffort-poddc744470_c98f_40e0_a342_3186715e3534.slice - libcontainer container kubepods-besteffort-poddc744470_c98f_40e0_a342_3186715e3534.slice. Sep 13 00:07:58.906322 systemd[1]: Created slice kubepods-besteffort-podb6ce864c_530a_4daf_972c_b9dd880246b5.slice - libcontainer container kubepods-besteffort-podb6ce864c_530a_4daf_972c_b9dd880246b5.slice. 
Sep 13 00:07:58.911557 systemd[1]: Created slice kubepods-besteffort-pod6eeed7fc_3b7d_46e1_a392_48b84ced79bd.slice - libcontainer container kubepods-besteffort-pod6eeed7fc_3b7d_46e1_a392_48b84ced79bd.slice. Sep 13 00:07:58.955255 kubelet[2723]: I0913 00:07:58.955214 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r57l\" (UniqueName: \"kubernetes.io/projected/c55cbdbc-12fb-409d-96af-7aa0cff6b9cc-kube-api-access-2r57l\") pod \"coredns-668d6bf9bc-bvgjw\" (UID: \"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc\") " pod="kube-system/coredns-668d6bf9bc-bvgjw" Sep 13 00:07:58.955255 kubelet[2723]: I0913 00:07:58.955260 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc744470-c98f-40e0-a342-3186715e3534-calico-apiserver-certs\") pod \"calico-apiserver-748877ff9b-h2slx\" (UID: \"dc744470-c98f-40e0-a342-3186715e3534\") " pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" Sep 13 00:07:58.955373 kubelet[2723]: I0913 00:07:58.955296 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtrf\" (UniqueName: \"kubernetes.io/projected/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-kube-api-access-5vtrf\") pod \"whisker-b78ffb84d-4tkck\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " pod="calico-system/whisker-b78ffb84d-4tkck" Sep 13 00:07:58.955373 kubelet[2723]: I0913 00:07:58.955308 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1bab13-6a3e-4841-8b06-97978d7ef332-config\") pod \"goldmane-54d579b49d-pgggr\" (UID: \"cd1bab13-6a3e-4841-8b06-97978d7ef332\") " pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:58.955373 kubelet[2723]: I0913 00:07:58.955320 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-g2st5\" (UniqueName: \"kubernetes.io/projected/cd1bab13-6a3e-4841-8b06-97978d7ef332-kube-api-access-g2st5\") pod \"goldmane-54d579b49d-pgggr\" (UID: \"cd1bab13-6a3e-4841-8b06-97978d7ef332\") " pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:58.955373 kubelet[2723]: I0913 00:07:58.955331 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-backend-key-pair\") pod \"whisker-b78ffb84d-4tkck\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " pod="calico-system/whisker-b78ffb84d-4tkck" Sep 13 00:07:58.955373 kubelet[2723]: I0913 00:07:58.955340 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5dq\" (UniqueName: \"kubernetes.io/projected/b6ce864c-530a-4daf-972c-b9dd880246b5-kube-api-access-lp5dq\") pod \"calico-apiserver-748877ff9b-8wsdp\" (UID: \"b6ce864c-530a-4daf-972c-b9dd880246b5\") " pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" Sep 13 00:07:58.955472 kubelet[2723]: I0913 00:07:58.955380 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-ca-bundle\") pod \"whisker-b78ffb84d-4tkck\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " pod="calico-system/whisker-b78ffb84d-4tkck" Sep 13 00:07:58.955472 kubelet[2723]: I0913 00:07:58.955391 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd1bab13-6a3e-4841-8b06-97978d7ef332-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-pgggr\" (UID: \"cd1bab13-6a3e-4841-8b06-97978d7ef332\") " pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:58.955472 kubelet[2723]: I0913 
00:07:58.955410 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cd1bab13-6a3e-4841-8b06-97978d7ef332-goldmane-key-pair\") pod \"goldmane-54d579b49d-pgggr\" (UID: \"cd1bab13-6a3e-4841-8b06-97978d7ef332\") " pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:58.955472 kubelet[2723]: I0913 00:07:58.955452 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91c757d2-79fb-408e-8fa6-48713c1e092e-tigera-ca-bundle\") pod \"calico-kube-controllers-5ff7f7d44c-52xq6\" (UID: \"91c757d2-79fb-408e-8fa6-48713c1e092e\") " pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" Sep 13 00:07:58.955472 kubelet[2723]: I0913 00:07:58.955463 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhqz\" (UniqueName: \"kubernetes.io/projected/971b88ba-d1d5-453d-978e-99740f9cff86-kube-api-access-qrhqz\") pod \"coredns-668d6bf9bc-27vjx\" (UID: \"971b88ba-d1d5-453d-978e-99740f9cff86\") " pod="kube-system/coredns-668d6bf9bc-27vjx" Sep 13 00:07:58.955556 kubelet[2723]: I0913 00:07:58.955473 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cbdbc-12fb-409d-96af-7aa0cff6b9cc-config-volume\") pod \"coredns-668d6bf9bc-bvgjw\" (UID: \"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc\") " pod="kube-system/coredns-668d6bf9bc-bvgjw" Sep 13 00:07:58.955556 kubelet[2723]: I0913 00:07:58.955484 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/971b88ba-d1d5-453d-978e-99740f9cff86-config-volume\") pod \"coredns-668d6bf9bc-27vjx\" (UID: \"971b88ba-d1d5-453d-978e-99740f9cff86\") " 
pod="kube-system/coredns-668d6bf9bc-27vjx" Sep 13 00:07:58.955556 kubelet[2723]: I0913 00:07:58.955523 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6ce864c-530a-4daf-972c-b9dd880246b5-calico-apiserver-certs\") pod \"calico-apiserver-748877ff9b-8wsdp\" (UID: \"b6ce864c-530a-4daf-972c-b9dd880246b5\") " pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" Sep 13 00:07:58.955556 kubelet[2723]: I0913 00:07:58.955537 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l648j\" (UniqueName: \"kubernetes.io/projected/91c757d2-79fb-408e-8fa6-48713c1e092e-kube-api-access-l648j\") pod \"calico-kube-controllers-5ff7f7d44c-52xq6\" (UID: \"91c757d2-79fb-408e-8fa6-48713c1e092e\") " pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" Sep 13 00:07:58.955556 kubelet[2723]: I0913 00:07:58.955549 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjp8j\" (UniqueName: \"kubernetes.io/projected/dc744470-c98f-40e0-a342-3186715e3534-kube-api-access-pjp8j\") pod \"calico-apiserver-748877ff9b-h2slx\" (UID: \"dc744470-c98f-40e0-a342-3186715e3534\") " pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" Sep 13 00:07:59.187759 containerd[1531]: time="2025-09-13T00:07:59.187441761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27vjx,Uid:971b88ba-d1d5-453d-978e-99740f9cff86,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:59.189386 containerd[1531]: time="2025-09-13T00:07:59.189362247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvgjw,Uid:c55cbdbc-12fb-409d-96af-7aa0cff6b9cc,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:59.193136 containerd[1531]: time="2025-09-13T00:07:59.193113873Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-pgggr,Uid:cd1bab13-6a3e-4841-8b06-97978d7ef332,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:59.202448 containerd[1531]: time="2025-09-13T00:07:59.202327410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff7f7d44c-52xq6,Uid:91c757d2-79fb-408e-8fa6-48713c1e092e,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:59.204009 containerd[1531]: time="2025-09-13T00:07:59.203913823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-h2slx,Uid:dc744470-c98f-40e0-a342-3186715e3534,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:59.209478 containerd[1531]: time="2025-09-13T00:07:59.209404991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-8wsdp,Uid:b6ce864c-530a-4daf-972c-b9dd880246b5,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:59.216624 containerd[1531]: time="2025-09-13T00:07:59.216364447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b78ffb84d-4tkck,Uid:6eeed7fc-3b7d-46e1-a392-48b84ced79bd,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:59.477587 containerd[1531]: time="2025-09-13T00:07:59.477509288Z" level=error msg="Failed to destroy network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.478280 containerd[1531]: time="2025-09-13T00:07:59.478153693Z" level=error msg="Failed to destroy network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.480536 containerd[1531]: time="2025-09-13T00:07:59.480520450Z" 
level=error msg="encountered an error cleaning up failed sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.481027 containerd[1531]: time="2025-09-13T00:07:59.481012780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pgggr,Uid:cd1bab13-6a3e-4841-8b06-97978d7ef332,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.484287 containerd[1531]: time="2025-09-13T00:07:59.483663181Z" level=error msg="Failed to destroy network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.484287 containerd[1531]: time="2025-09-13T00:07:59.484266105Z" level=error msg="encountered an error cleaning up failed sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.484350 containerd[1531]: time="2025-09-13T00:07:59.484290487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-8wsdp,Uid:b6ce864c-530a-4daf-972c-b9dd880246b5,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.484384 containerd[1531]: time="2025-09-13T00:07:59.484355572Z" level=error msg="Failed to destroy network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484545777Z" level=error msg="encountered an error cleaning up failed sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484572794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27vjx,Uid:971b88ba-d1d5-453d-978e-99740f9cff86,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484628341Z" level=error msg="Failed to destroy network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484783545Z" level=error msg="encountered an error cleaning up failed sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484811107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b78ffb84d-4tkck,Uid:6eeed7fc-3b7d-46e1-a392-48b84ced79bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.484845184Z" level=error msg="Failed to destroy network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.485002898Z" level=error msg="encountered an error cleaning up failed sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.485019351Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-h2slx,Uid:dc744470-c98f-40e0-a342-3186715e3534,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485141 containerd[1531]: time="2025-09-13T00:07:59.485059458Z" level=error msg="Failed to destroy network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485344 containerd[1531]: time="2025-09-13T00:07:59.485221920Z" level=error msg="encountered an error cleaning up failed sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485344 containerd[1531]: time="2025-09-13T00:07:59.485246167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff7f7d44c-52xq6,Uid:91c757d2-79fb-408e-8fa6-48713c1e092e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485344 containerd[1531]: time="2025-09-13T00:07:59.480939051Z" level=error msg="encountered an error cleaning up failed 
sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.485344 containerd[1531]: time="2025-09-13T00:07:59.485295092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvgjw,Uid:c55cbdbc-12fb-409d-96af-7aa0cff6b9cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486026 kubelet[2723]: E0913 00:07:59.485955 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486116 kubelet[2723]: E0913 00:07:59.486096 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486143 kubelet[2723]: E0913 00:07:59.486118 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486189 kubelet[2723]: E0913 00:07:59.486157 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486985 kubelet[2723]: E0913 00:07:59.486207 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486985 kubelet[2723]: E0913 00:07:59.486219 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.486985 kubelet[2723]: E0913 00:07:59.486228 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.493381 kubelet[2723]: E0913 
00:07:59.493231 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bvgjw" Sep 13 00:07:59.493381 kubelet[2723]: E0913 00:07:59.493242 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-27vjx" Sep 13 00:07:59.493381 kubelet[2723]: E0913 00:07:59.493252 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bvgjw" Sep 13 00:07:59.493381 kubelet[2723]: E0913 00:07:59.493262 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:59.493467 kubelet[2723]: E0913 00:07:59.493270 2723 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-pgggr" Sep 13 00:07:59.493467 kubelet[2723]: E0913 00:07:59.493283 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bvgjw_kube-system(c55cbdbc-12fb-409d-96af-7aa0cff6b9cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bvgjw_kube-system(c55cbdbc-12fb-409d-96af-7aa0cff6b9cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bvgjw" podUID="c55cbdbc-12fb-409d-96af-7aa0cff6b9cc" Sep 13 00:07:59.493467 kubelet[2723]: E0913 00:07:59.493286 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-pgggr_calico-system(cd1bab13-6a3e-4841-8b06-97978d7ef332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-pgggr_calico-system(cd1bab13-6a3e-4841-8b06-97978d7ef332)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-pgggr" 
podUID="cd1bab13-6a3e-4841-8b06-97978d7ef332" Sep 13 00:07:59.493552 kubelet[2723]: E0913 00:07:59.493297 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" Sep 13 00:07:59.493552 kubelet[2723]: E0913 00:07:59.493231 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" Sep 13 00:07:59.493552 kubelet[2723]: E0913 00:07:59.493306 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" Sep 13 00:07:59.493609 kubelet[2723]: E0913 00:07:59.493332 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5ff7f7d44c-52xq6_calico-system(91c757d2-79fb-408e-8fa6-48713c1e092e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5ff7f7d44c-52xq6_calico-system(91c757d2-79fb-408e-8fa6-48713c1e092e)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" podUID="91c757d2-79fb-408e-8fa6-48713c1e092e" Sep 13 00:07:59.493609 kubelet[2723]: E0913 00:07:59.493254 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-27vjx" Sep 13 00:07:59.493609 kubelet[2723]: E0913 00:07:59.493362 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-27vjx_kube-system(971b88ba-d1d5-453d-978e-99740f9cff86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-27vjx_kube-system(971b88ba-d1d5-453d-978e-99740f9cff86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-27vjx" podUID="971b88ba-d1d5-453d-978e-99740f9cff86" Sep 13 00:07:59.493697 kubelet[2723]: E0913 00:07:59.493376 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" Sep 13 00:07:59.493697 kubelet[2723]: E0913 00:07:59.493384 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" Sep 13 00:07:59.493697 kubelet[2723]: E0913 00:07:59.493397 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-748877ff9b-h2slx_calico-apiserver(dc744470-c98f-40e0-a342-3186715e3534)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748877ff9b-h2slx_calico-apiserver(dc744470-c98f-40e0-a342-3186715e3534)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" podUID="dc744470-c98f-40e0-a342-3186715e3534" Sep 13 00:07:59.493764 kubelet[2723]: E0913 00:07:59.493408 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-b78ffb84d-4tkck" Sep 13 00:07:59.493764 kubelet[2723]: E0913 00:07:59.493415 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b78ffb84d-4tkck" Sep 13 00:07:59.493764 kubelet[2723]: E0913 00:07:59.493426 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b78ffb84d-4tkck_calico-system(6eeed7fc-3b7d-46e1-a392-48b84ced79bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b78ffb84d-4tkck_calico-system(6eeed7fc-3b7d-46e1-a392-48b84ced79bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b78ffb84d-4tkck" podUID="6eeed7fc-3b7d-46e1-a392-48b84ced79bd" Sep 13 00:07:59.493850 kubelet[2723]: E0913 00:07:59.493307 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" Sep 13 00:07:59.493850 kubelet[2723]: E0913 00:07:59.493443 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-748877ff9b-8wsdp_calico-apiserver(b6ce864c-530a-4daf-972c-b9dd880246b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-748877ff9b-8wsdp_calico-apiserver(b6ce864c-530a-4daf-972c-b9dd880246b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" podUID="b6ce864c-530a-4daf-972c-b9dd880246b5" Sep 13 00:07:59.676151 systemd[1]: Created slice kubepods-besteffort-pod42d9d358_be40_4d01_b1a6_5f1a28850352.slice - libcontainer container kubepods-besteffort-pod42d9d358_be40_4d01_b1a6_5f1a28850352.slice. Sep 13 00:07:59.678299 containerd[1531]: time="2025-09-13T00:07:59.678281546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prqfd,Uid:42d9d358-be40-4d01-b1a6-5f1a28850352,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:59.712459 containerd[1531]: time="2025-09-13T00:07:59.712417128Z" level=error msg="Failed to destroy network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.712817 containerd[1531]: time="2025-09-13T00:07:59.712727993Z" level=error msg="encountered an error cleaning up failed sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
00:07:59.712817 containerd[1531]: time="2025-09-13T00:07:59.712760281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prqfd,Uid:42d9d358-be40-4d01-b1a6-5f1a28850352,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.712935 kubelet[2723]: E0913 00:07:59.712906 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.712971 kubelet[2723]: E0913 00:07:59.712949 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:59.712971 kubelet[2723]: E0913 00:07:59.712963 2723 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prqfd" Sep 13 00:07:59.713016 kubelet[2723]: 
E0913 00:07:59.712992 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-prqfd_calico-system(42d9d358-be40-4d01-b1a6-5f1a28850352)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-prqfd_calico-system(42d9d358-be40-4d01-b1a6-5f1a28850352)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:59.843345 containerd[1531]: time="2025-09-13T00:07:59.842718570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:07:59.846352 kubelet[2723]: I0913 00:07:59.846333 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:07:59.847708 kubelet[2723]: I0913 00:07:59.847483 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:07:59.860718 kubelet[2723]: I0913 00:07:59.860624 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:07:59.862615 kubelet[2723]: I0913 00:07:59.862498 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:07:59.864397 kubelet[2723]: I0913 00:07:59.864377 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:07:59.866497 
kubelet[2723]: I0913 00:07:59.866069 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:07:59.867314 kubelet[2723]: I0913 00:07:59.867305 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:07:59.868064 kubelet[2723]: I0913 00:07:59.868011 2723 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:07:59.893712 containerd[1531]: time="2025-09-13T00:07:59.893649554Z" level=info msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" Sep 13 00:07:59.894776 containerd[1531]: time="2025-09-13T00:07:59.894149294Z" level=info msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" Sep 13 00:07:59.894838 containerd[1531]: time="2025-09-13T00:07:59.894775565Z" level=info msg="Ensure that sandbox 62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814 in task-service has been cleanup successfully" Sep 13 00:07:59.894890 containerd[1531]: time="2025-09-13T00:07:59.894878768Z" level=info msg="Ensure that sandbox 597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff in task-service has been cleanup successfully" Sep 13 00:07:59.895543 containerd[1531]: time="2025-09-13T00:07:59.895526570Z" level=info msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" Sep 13 00:07:59.895636 containerd[1531]: time="2025-09-13T00:07:59.895625174Z" level=info msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" Sep 13 00:07:59.895976 containerd[1531]: time="2025-09-13T00:07:59.895964528Z" level=info msg="Ensure that sandbox 41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819 in 
task-service has been cleanup successfully" Sep 13 00:07:59.897071 containerd[1531]: time="2025-09-13T00:07:59.897053311Z" level=info msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" Sep 13 00:07:59.897192 containerd[1531]: time="2025-09-13T00:07:59.897142935Z" level=info msg="Ensure that sandbox c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3 in task-service has been cleanup successfully" Sep 13 00:07:59.898444 containerd[1531]: time="2025-09-13T00:07:59.898340534Z" level=info msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" Sep 13 00:07:59.898690 containerd[1531]: time="2025-09-13T00:07:59.898671862Z" level=info msg="Ensure that sandbox e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6 in task-service has been cleanup successfully" Sep 13 00:07:59.899210 containerd[1531]: time="2025-09-13T00:07:59.899088574Z" level=info msg="Ensure that sandbox 2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87 in task-service has been cleanup successfully" Sep 13 00:07:59.899913 containerd[1531]: time="2025-09-13T00:07:59.899899551Z" level=info msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" Sep 13 00:07:59.900057 containerd[1531]: time="2025-09-13T00:07:59.900046301Z" level=info msg="Ensure that sandbox 697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04 in task-service has been cleanup successfully" Sep 13 00:07:59.900956 containerd[1531]: time="2025-09-13T00:07:59.900757262Z" level=info msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" Sep 13 00:07:59.900956 containerd[1531]: time="2025-09-13T00:07:59.900841776Z" level=info msg="Ensure that sandbox d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39 in task-service has been cleanup successfully" Sep 13 00:07:59.961626 containerd[1531]: time="2025-09-13T00:07:59.961587194Z" 
level=error msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" failed" error="failed to destroy network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.964026 kubelet[2723]: E0913 00:07:59.961892 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:07:59.964026 kubelet[2723]: E0913 00:07:59.961948 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819"} Sep 13 00:07:59.964026 kubelet[2723]: E0913 00:07:59.961989 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd1bab13-6a3e-4841-8b06-97978d7ef332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:59.964026 kubelet[2723]: E0913 00:07:59.962016 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd1bab13-6a3e-4841-8b06-97978d7ef332\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-pgggr" podUID="cd1bab13-6a3e-4841-8b06-97978d7ef332" Sep 13 00:07:59.967665 containerd[1531]: time="2025-09-13T00:07:59.967637075Z" level=error msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" failed" error="failed to destroy network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.971040 kubelet[2723]: E0913 00:07:59.971012 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:07:59.971141 kubelet[2723]: E0913 00:07:59.971050 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6"} Sep 13 00:07:59.971141 kubelet[2723]: E0913 00:07:59.971075 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42d9d358-be40-4d01-b1a6-5f1a28850352\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:59.971141 kubelet[2723]: E0913 00:07:59.971089 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42d9d358-be40-4d01-b1a6-5f1a28850352\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prqfd" podUID="42d9d358-be40-4d01-b1a6-5f1a28850352" Sep 13 00:07:59.988506 containerd[1531]: time="2025-09-13T00:07:59.988470151Z" level=error msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" failed" error="failed to destroy network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.988696 kubelet[2723]: E0913 00:07:59.988674 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:07:59.988768 kubelet[2723]: E0913 00:07:59.988755 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04"} Sep 13 00:07:59.988829 kubelet[2723]: E0913 00:07:59.988819 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:59.988937 kubelet[2723]: E0913 00:07:59.988924 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b78ffb84d-4tkck" podUID="6eeed7fc-3b7d-46e1-a392-48b84ced79bd" Sep 13 00:07:59.991656 containerd[1531]: time="2025-09-13T00:07:59.991627477Z" level=error msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" failed" error="failed to destroy network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:59.991750 kubelet[2723]: E0913 00:07:59.991731 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:07:59.991781 kubelet[2723]: E0913 00:07:59.991754 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3"} Sep 13 00:07:59.991781 kubelet[2723]: E0913 00:07:59.991771 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:59.991882 kubelet[2723]: E0913 00:07:59.991786 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bvgjw" podUID="c55cbdbc-12fb-409d-96af-7aa0cff6b9cc" Sep 13 00:08:00.000292 containerd[1531]: time="2025-09-13T00:08:00.000260474Z" level=error msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" failed" error="failed to destroy network for sandbox 
\"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.000568 kubelet[2723]: E0913 00:08:00.000414 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:00.000568 kubelet[2723]: E0913 00:08:00.000445 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff"} Sep 13 00:08:00.000568 kubelet[2723]: E0913 00:08:00.000472 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc744470-c98f-40e0-a342-3186715e3534\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:00.000568 kubelet[2723]: E0913 00:08:00.000489 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc744470-c98f-40e0-a342-3186715e3534\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" podUID="dc744470-c98f-40e0-a342-3186715e3534" Sep 13 00:08:00.002306 containerd[1531]: time="2025-09-13T00:08:00.002240809Z" level=error msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" failed" error="failed to destroy network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.002365 kubelet[2723]: E0913 00:08:00.002334 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:00.002365 kubelet[2723]: E0913 00:08:00.002354 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39"} Sep 13 00:08:00.002509 kubelet[2723]: E0913 00:08:00.002371 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"971b88ba-d1d5-453d-978e-99740f9cff86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Sep 13 00:08:00.002509 kubelet[2723]: E0913 00:08:00.002382 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"971b88ba-d1d5-453d-978e-99740f9cff86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-27vjx" podUID="971b88ba-d1d5-453d-978e-99740f9cff86" Sep 13 00:08:00.003426 containerd[1531]: time="2025-09-13T00:08:00.003268184Z" level=error msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" failed" error="failed to destroy network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.003464 kubelet[2723]: E0913 00:08:00.003345 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:00.003464 kubelet[2723]: E0913 00:08:00.003364 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814"} Sep 13 00:08:00.003464 kubelet[2723]: E0913 00:08:00.003378 2723 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91c757d2-79fb-408e-8fa6-48713c1e092e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:00.003464 kubelet[2723]: E0913 00:08:00.003389 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91c757d2-79fb-408e-8fa6-48713c1e092e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" podUID="91c757d2-79fb-408e-8fa6-48713c1e092e" Sep 13 00:08:00.007269 containerd[1531]: time="2025-09-13T00:08:00.007225399Z" level=error msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" failed" error="failed to destroy network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:08:00.007548 kubelet[2723]: E0913 00:08:00.007459 2723 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:00.007548 kubelet[2723]: E0913 00:08:00.007490 2723 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87"} Sep 13 00:08:00.007548 kubelet[2723]: E0913 00:08:00.007510 2723 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6ce864c-530a-4daf-972c-b9dd880246b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:08:00.007548 kubelet[2723]: E0913 00:08:00.007526 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6ce864c-530a-4daf-972c-b9dd880246b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" podUID="b6ce864c-530a-4daf-972c-b9dd880246b5" Sep 13 00:08:05.129485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169122128.mount: Deactivated successfully. 
Sep 13 00:08:05.455567 containerd[1531]: time="2025-09-13T00:08:05.447735686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:08:05.466243 containerd[1531]: time="2025-09-13T00:08:05.466206107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:05.487082 containerd[1531]: time="2025-09-13T00:08:05.487050700Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:05.487849 containerd[1531]: time="2025-09-13T00:08:05.487830975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:05.488685 containerd[1531]: time="2025-09-13T00:08:05.488659017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.644731761s" Sep 13 00:08:05.488748 containerd[1531]: time="2025-09-13T00:08:05.488686404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:08:05.589247 containerd[1531]: time="2025-09-13T00:08:05.589212274Z" level=info msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:08:05.644033 containerd[1531]: time="2025-09-13T00:08:05.643985838Z" level=info 
msg="CreateContainer within sandbox \"f048b9764b4ba1f984556cd65fbdbfe6090e13931f8f1c185a820aff0dfed288\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"39db039151e45db4eb97d6b2e0f90f8c57fc0bb079ea2f0ce9da3127e99641e7\"" Sep 13 00:08:05.644704 containerd[1531]: time="2025-09-13T00:08:05.644669871Z" level=info msg="StartContainer for \"39db039151e45db4eb97d6b2e0f90f8c57fc0bb079ea2f0ce9da3127e99641e7\"" Sep 13 00:08:05.836274 systemd[1]: Started cri-containerd-39db039151e45db4eb97d6b2e0f90f8c57fc0bb079ea2f0ce9da3127e99641e7.scope - libcontainer container 39db039151e45db4eb97d6b2e0f90f8c57fc0bb079ea2f0ce9da3127e99641e7. Sep 13 00:08:05.864893 containerd[1531]: time="2025-09-13T00:08:05.864802266Z" level=info msg="StartContainer for \"39db039151e45db4eb97d6b2e0f90f8c57fc0bb079ea2f0ce9da3127e99641e7\" returns successfully" Sep 13 00:08:05.985786 kubelet[2723]: I0913 00:08:05.973662 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdhzt" podStartSLOduration=1.564336312 podStartE2EDuration="16.960600809s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:07:50.09287737 +0000 UTC m=+17.490601715" lastFinishedPulling="2025-09-13 00:08:05.489141871 +0000 UTC m=+32.886866212" observedRunningTime="2025-09-13 00:08:05.960117936 +0000 UTC m=+33.357842290" watchObservedRunningTime="2025-09-13 00:08:05.960600809 +0000 UTC m=+33.358325162" Sep 13 00:08:06.268482 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:08:06.269980 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:08:06.811622 containerd[1531]: time="2025-09-13T00:08:06.810760450Z" level=info msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.560 [INFO][3911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.581 [INFO][3911] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" iface="eth0" netns="/var/run/netns/cni-78dce7b7-d4f4-c96b-717e-47cc32286c91" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.588 [INFO][3911] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" iface="eth0" netns="/var/run/netns/cni-78dce7b7-d4f4-c96b-717e-47cc32286c91" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.622 [INFO][3911] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" iface="eth0" netns="/var/run/netns/cni-78dce7b7-d4f4-c96b-717e-47cc32286c91" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.622 [INFO][3911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:07.622 [INFO][3911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.381 [INFO][3930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.390 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.390 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.427 [WARNING][3930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.427 [INFO][3930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.428 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:08.432036 containerd[1531]: 2025-09-13 00:08:08.429 [INFO][3911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:08.447729 containerd[1531]: time="2025-09-13T00:08:08.439756627Z" level=info msg="TearDown network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" successfully" Sep 13 00:08:08.447729 containerd[1531]: time="2025-09-13T00:08:08.439779400Z" level=info msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" returns successfully" Sep 13 00:08:08.433731 systemd[1]: run-netns-cni\x2d78dce7b7\x2dd4f4\x2dc96b\x2d717e\x2d47cc32286c91.mount: Deactivated successfully. 
Sep 13 00:08:08.577545 kubelet[2723]: I0913 00:08:08.577505 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-ca-bundle\") pod \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " Sep 13 00:08:08.577545 kubelet[2723]: I0913 00:08:08.577551 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-backend-key-pair\") pod \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " Sep 13 00:08:08.577839 kubelet[2723]: I0913 00:08:08.577572 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtrf\" (UniqueName: \"kubernetes.io/projected/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-kube-api-access-5vtrf\") pod \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\" (UID: \"6eeed7fc-3b7d-46e1-a392-48b84ced79bd\") " Sep 13 00:08:08.589624 kubelet[2723]: I0913 00:08:08.588174 2723 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6eeed7fc-3b7d-46e1-a392-48b84ced79bd" (UID: "6eeed7fc-3b7d-46e1-a392-48b84ced79bd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:08:08.592347 kubelet[2723]: I0913 00:08:08.592304 2723 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6eeed7fc-3b7d-46e1-a392-48b84ced79bd" (UID: "6eeed7fc-3b7d-46e1-a392-48b84ced79bd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:08:08.593175 systemd[1]: var-lib-kubelet-pods-6eeed7fc\x2d3b7d\x2d46e1\x2da392\x2d48b84ced79bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vtrf.mount: Deactivated successfully. Sep 13 00:08:08.594344 kubelet[2723]: I0913 00:08:08.593256 2723 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-kube-api-access-5vtrf" (OuterVolumeSpecName: "kube-api-access-5vtrf") pod "6eeed7fc-3b7d-46e1-a392-48b84ced79bd" (UID: "6eeed7fc-3b7d-46e1-a392-48b84ced79bd"). InnerVolumeSpecName "kube-api-access-5vtrf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:08:08.593348 systemd[1]: var-lib-kubelet-pods-6eeed7fc\x2d3b7d\x2d46e1\x2da392\x2d48b84ced79bd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:08:08.680755 kubelet[2723]: I0913 00:08:08.680691 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5vtrf\" (UniqueName: \"kubernetes.io/projected/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-kube-api-access-5vtrf\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:08.680755 kubelet[2723]: I0913 00:08:08.680730 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:08.680755 kubelet[2723]: I0913 00:08:08.680737 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6eeed7fc-3b7d-46e1-a392-48b84ced79bd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:08:08.687105 systemd[1]: Removed slice kubepods-besteffort-pod6eeed7fc_3b7d_46e1_a392_48b84ced79bd.slice - libcontainer container kubepods-besteffort-pod6eeed7fc_3b7d_46e1_a392_48b84ced79bd.slice. 
Sep 13 00:08:09.182645 systemd[1]: Created slice kubepods-besteffort-pod7a016995_99e7_49da_a9ce_a021fddb9bc5.slice - libcontainer container kubepods-besteffort-pod7a016995_99e7_49da_a9ce_a021fddb9bc5.slice. Sep 13 00:08:09.196634 kubelet[2723]: I0913 00:08:09.196596 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a016995-99e7-49da-a9ce-a021fddb9bc5-whisker-backend-key-pair\") pod \"whisker-5f5db7d7dc-ctpvc\" (UID: \"7a016995-99e7-49da-a9ce-a021fddb9bc5\") " pod="calico-system/whisker-5f5db7d7dc-ctpvc" Sep 13 00:08:09.196634 kubelet[2723]: I0913 00:08:09.196634 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a016995-99e7-49da-a9ce-a021fddb9bc5-whisker-ca-bundle\") pod \"whisker-5f5db7d7dc-ctpvc\" (UID: \"7a016995-99e7-49da-a9ce-a021fddb9bc5\") " pod="calico-system/whisker-5f5db7d7dc-ctpvc" Sep 13 00:08:09.196792 kubelet[2723]: I0913 00:08:09.196650 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcg5\" (UniqueName: \"kubernetes.io/projected/7a016995-99e7-49da-a9ce-a021fddb9bc5-kube-api-access-xmcg5\") pod \"whisker-5f5db7d7dc-ctpvc\" (UID: \"7a016995-99e7-49da-a9ce-a021fddb9bc5\") " pod="calico-system/whisker-5f5db7d7dc-ctpvc" Sep 13 00:08:09.487970 containerd[1531]: time="2025-09-13T00:08:09.487677889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5db7d7dc-ctpvc,Uid:7a016995-99e7-49da-a9ce-a021fddb9bc5,Namespace:calico-system,Attempt:0,}" Sep 13 00:08:09.720878 systemd-networkd[1433]: cali6539602b040: Link UP Sep 13 00:08:09.721039 systemd-networkd[1433]: cali6539602b040: Gained carrier Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.541 [INFO][4050] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 
00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.554 [INFO][4050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0 whisker-5f5db7d7dc- calico-system 7a016995-99e7-49da-a9ce-a021fddb9bc5 863 0 2025-09-13 00:08:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f5db7d7dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f5db7d7dc-ctpvc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6539602b040 [] [] }} ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.554 [INFO][4050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.593 [INFO][4065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" HandleID="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Workload="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.594 [INFO][4065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" HandleID="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Workload="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f5db7d7dc-ctpvc", "timestamp":"2025-09-13 00:08:09.593057077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.594 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.594 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.594 [INFO][4065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.604 [INFO][4065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.662 [INFO][4065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.664 [INFO][4065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.665 [INFO][4065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.666 [INFO][4065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.666 [INFO][4065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" 
host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.667 [INFO][4065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.683 [INFO][4065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.697 [INFO][4065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.697 [INFO][4065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" host="localhost" Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.697 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:08:09.733887 containerd[1531]: 2025-09-13 00:08:09.697 [INFO][4065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" HandleID="k8s-pod-network.7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Workload="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.699 [INFO][4050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0", GenerateName:"whisker-5f5db7d7dc-", Namespace:"calico-system", SelfLink:"", UID:"7a016995-99e7-49da-a9ce-a021fddb9bc5", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f5db7d7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f5db7d7dc-ctpvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6539602b040", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.699 [INFO][4050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.699 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6539602b040 ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.722 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.722 [INFO][4050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0", GenerateName:"whisker-5f5db7d7dc-", Namespace:"calico-system", SelfLink:"", UID:"7a016995-99e7-49da-a9ce-a021fddb9bc5", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 8, 9, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f5db7d7dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced", Pod:"whisker-5f5db7d7dc-ctpvc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6539602b040", MAC:"b2:3b:dd:4a:79:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:09.736819 containerd[1531]: 2025-09-13 00:08:09.730 [INFO][4050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced" Namespace="calico-system" Pod="whisker-5f5db7d7dc-ctpvc" WorkloadEndpoint="localhost-k8s-whisker--5f5db7d7dc--ctpvc-eth0" Sep 13 00:08:09.759426 containerd[1531]: time="2025-09-13T00:08:09.759003255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:09.759426 containerd[1531]: time="2025-09-13T00:08:09.759036656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:09.759426 containerd[1531]: time="2025-09-13T00:08:09.759046263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:09.759426 containerd[1531]: time="2025-09-13T00:08:09.759091173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:09.785286 systemd[1]: Started cri-containerd-7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced.scope - libcontainer container 7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced. Sep 13 00:08:09.802261 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:09.844653 containerd[1531]: time="2025-09-13T00:08:09.843991088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5db7d7dc-ctpvc,Uid:7a016995-99e7-49da-a9ce-a021fddb9bc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced\"" Sep 13 00:08:09.846715 containerd[1531]: time="2025-09-13T00:08:09.846693171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:08:10.673847 containerd[1531]: time="2025-09-13T00:08:10.673659280Z" level=info msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" Sep 13 00:08:10.674290 containerd[1531]: time="2025-09-13T00:08:10.674121162Z" level=info msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" Sep 13 00:08:10.718485 kubelet[2723]: I0913 00:08:10.675706 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eeed7fc-3b7d-46e1-a392-48b84ced79bd" path="/var/lib/kubelet/pods/6eeed7fc-3b7d-46e1-a392-48b84ced79bd/volumes" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.751 [INFO][4164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.751 [INFO][4164] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" iface="eth0" netns="/var/run/netns/cni-88a4a3be-1a57-ba3f-4e59-c5e779e4f867" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.751 [INFO][4164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" iface="eth0" netns="/var/run/netns/cni-88a4a3be-1a57-ba3f-4e59-c5e779e4f867" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.751 [INFO][4164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" iface="eth0" netns="/var/run/netns/cni-88a4a3be-1a57-ba3f-4e59-c5e779e4f867" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.751 [INFO][4164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.752 [INFO][4164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.788 [INFO][4182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.788 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.788 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.792 [WARNING][4182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.792 [INFO][4182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.793 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:10.795006 containerd[1531]: 2025-09-13 00:08:10.794 [INFO][4164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:10.803798 containerd[1531]: time="2025-09-13T00:08:10.796789311Z" level=info msg="TearDown network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" successfully" Sep 13 00:08:10.803798 containerd[1531]: time="2025-09-13T00:08:10.796815031Z" level=info msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" returns successfully" Sep 13 00:08:10.803798 containerd[1531]: time="2025-09-13T00:08:10.797840816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff7f7d44c-52xq6,Uid:91c757d2-79fb-408e-8fa6-48713c1e092e,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:10.798815 systemd[1]: run-netns-cni\x2d88a4a3be\x2d1a57\x2dba3f\x2d4e59\x2dc5e779e4f867.mount: Deactivated successfully. 
Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.723 [INFO][4163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.724 [INFO][4163] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" iface="eth0" netns="/var/run/netns/cni-9b381c19-90ad-e302-a99c-b9e7a0a2f6d6" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.725 [INFO][4163] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" iface="eth0" netns="/var/run/netns/cni-9b381c19-90ad-e302-a99c-b9e7a0a2f6d6" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.725 [INFO][4163] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" iface="eth0" netns="/var/run/netns/cni-9b381c19-90ad-e302-a99c-b9e7a0a2f6d6" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.725 [INFO][4163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.725 [INFO][4163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.798 [INFO][4177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.798 [INFO][4177] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.798 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.802 [WARNING][4177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.802 [INFO][4177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.810 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:10.813086 containerd[1531]: 2025-09-13 00:08:10.811 [INFO][4163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:10.814564 systemd[1]: run-netns-cni\x2d9b381c19\x2d90ad\x2de302\x2da99c\x2db9e7a0a2f6d6.mount: Deactivated successfully. 
Sep 13 00:08:10.814821 containerd[1531]: time="2025-09-13T00:08:10.814729891Z" level=info msg="TearDown network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" successfully" Sep 13 00:08:10.814821 containerd[1531]: time="2025-09-13T00:08:10.814746201Z" level=info msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" returns successfully" Sep 13 00:08:10.815330 containerd[1531]: time="2025-09-13T00:08:10.815313211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pgggr,Uid:cd1bab13-6a3e-4841-8b06-97978d7ef332,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:10.936595 systemd-networkd[1433]: cali62d605b3a21: Link UP Sep 13 00:08:10.938563 systemd-networkd[1433]: cali62d605b3a21: Gained carrier Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.875 [INFO][4191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.883 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0 calico-kube-controllers-5ff7f7d44c- calico-system 91c757d2-79fb-408e-8fa6-48713c1e092e 874 0 2025-09-13 00:07:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5ff7f7d44c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5ff7f7d44c-52xq6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali62d605b3a21 [] [] }} ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-" Sep 13 
00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.883 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.902 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" HandleID="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.902 [INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" HandleID="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5ff7f7d44c-52xq6", "timestamp":"2025-09-13 00:08:10.902654494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.902 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.903 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.903 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.909 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.916 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.919 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.920 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.922 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.922 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.923 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.926 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" host="localhost" Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:10.953037 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" HandleID="k8s-pod-network.b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953558 containerd[1531]: 2025-09-13 00:08:10.931 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0", GenerateName:"calico-kube-controllers-5ff7f7d44c-", Namespace:"calico-system", SelfLink:"", UID:"91c757d2-79fb-408e-8fa6-48713c1e092e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff7f7d44c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5ff7f7d44c-52xq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62d605b3a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:10.953558 containerd[1531]: 2025-09-13 00:08:10.931 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953558 containerd[1531]: 2025-09-13 00:08:10.931 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62d605b3a21 ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953558 containerd[1531]: 2025-09-13 00:08:10.939 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.953558 containerd[1531]: 
2025-09-13 00:08:10.939 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0", GenerateName:"calico-kube-controllers-5ff7f7d44c-", Namespace:"calico-system", SelfLink:"", UID:"91c757d2-79fb-408e-8fa6-48713c1e092e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff7f7d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d", Pod:"calico-kube-controllers-5ff7f7d44c-52xq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62d605b3a21", MAC:"0a:67:e2:79:b5:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:10.953558 containerd[1531]: 
2025-09-13 00:08:10.949 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d" Namespace="calico-system" Pod="calico-kube-controllers-5ff7f7d44c-52xq6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:10.963566 containerd[1531]: time="2025-09-13T00:08:10.963300747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:10.963566 containerd[1531]: time="2025-09-13T00:08:10.963346159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:10.963566 containerd[1531]: time="2025-09-13T00:08:10.963375052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:10.963566 containerd[1531]: time="2025-09-13T00:08:10.963476484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:10.975273 systemd[1]: Started cri-containerd-b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d.scope - libcontainer container b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d. 
Sep 13 00:08:10.982636 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:11.003350 containerd[1531]: time="2025-09-13T00:08:11.002615732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ff7f7d44c-52xq6,Uid:91c757d2-79fb-408e-8fa6-48713c1e092e,Namespace:calico-system,Attempt:1,} returns sandbox id \"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d\"" Sep 13 00:08:11.030939 systemd-networkd[1433]: caliad72d5c3c6a: Link UP Sep 13 00:08:11.031039 systemd-networkd[1433]: caliad72d5c3c6a: Gained carrier Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.872 [INFO][4195] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.883 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--pgggr-eth0 goldmane-54d579b49d- calico-system cd1bab13-6a3e-4841-8b06-97978d7ef332 873 0 2025-09-13 00:07:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-pgggr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliad72d5c3c6a [] [] }} ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.883 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.912 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" HandleID="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.912 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" HandleID="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-pgggr", "timestamp":"2025-09-13 00:08:10.912738597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.912 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:10.930 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.010 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.017 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.021 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.021 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.023 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.023 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.023 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.025 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.028 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.028 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" host="localhost" Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.028 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:11.039539 containerd[1531]: 2025-09-13 00:08:11.028 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" HandleID="k8s-pod-network.e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.029 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--pgggr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cd1bab13-6a3e-4841-8b06-97978d7ef332", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-pgggr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad72d5c3c6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.029 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.029 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad72d5c3c6a ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.030 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.030 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--pgggr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cd1bab13-6a3e-4841-8b06-97978d7ef332", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c", Pod:"goldmane-54d579b49d-pgggr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad72d5c3c6a", MAC:"de:82:74:96:d6:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:11.039981 containerd[1531]: 2025-09-13 00:08:11.035 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c" Namespace="calico-system" Pod="goldmane-54d579b49d-pgggr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:11.055363 containerd[1531]: time="2025-09-13T00:08:11.055327012Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:11.055469 containerd[1531]: time="2025-09-13T00:08:11.055455641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:11.055586 containerd[1531]: time="2025-09-13T00:08:11.055509911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:11.055624 containerd[1531]: time="2025-09-13T00:08:11.055566888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:11.069280 systemd[1]: Started cri-containerd-e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c.scope - libcontainer container e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c. Sep 13 00:08:11.078782 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:11.104029 containerd[1531]: time="2025-09-13T00:08:11.104000064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-pgggr,Uid:cd1bab13-6a3e-4841-8b06-97978d7ef332,Namespace:calico-system,Attempt:1,} returns sandbox id \"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c\"" Sep 13 00:08:11.112303 systemd-networkd[1433]: cali6539602b040: Gained IPv6LL Sep 13 00:08:11.204672 containerd[1531]: time="2025-09-13T00:08:11.204608845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.205934 containerd[1531]: time="2025-09-13T00:08:11.205911161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:08:11.206114 containerd[1531]: time="2025-09-13T00:08:11.206099999Z" level=info 
msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.207658 containerd[1531]: time="2025-09-13T00:08:11.207641517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:11.208754 containerd[1531]: time="2025-09-13T00:08:11.208737238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.362023215s" Sep 13 00:08:11.208788 containerd[1531]: time="2025-09-13T00:08:11.208756489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:08:11.209670 containerd[1531]: time="2025-09-13T00:08:11.209399603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:08:11.210432 containerd[1531]: time="2025-09-13T00:08:11.210417611Z" level=info msg="CreateContainer within sandbox \"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:08:11.217059 containerd[1531]: time="2025-09-13T00:08:11.217038292Z" level=info msg="CreateContainer within sandbox \"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b915107fa05ad3a6101b5e36c447b72836a4ff795b5fe11ea2ceb9fad696def5\"" Sep 13 00:08:11.225799 containerd[1531]: time="2025-09-13T00:08:11.225773921Z" level=info 
msg="StartContainer for \"b915107fa05ad3a6101b5e36c447b72836a4ff795b5fe11ea2ceb9fad696def5\"" Sep 13 00:08:11.243280 systemd[1]: Started cri-containerd-b915107fa05ad3a6101b5e36c447b72836a4ff795b5fe11ea2ceb9fad696def5.scope - libcontainer container b915107fa05ad3a6101b5e36c447b72836a4ff795b5fe11ea2ceb9fad696def5. Sep 13 00:08:11.285675 containerd[1531]: time="2025-09-13T00:08:11.285643541Z" level=info msg="StartContainer for \"b915107fa05ad3a6101b5e36c447b72836a4ff795b5fe11ea2ceb9fad696def5\" returns successfully" Sep 13 00:08:11.672782 containerd[1531]: time="2025-09-13T00:08:11.672592190Z" level=info msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.711 [INFO][4377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.711 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" iface="eth0" netns="/var/run/netns/cni-5fa6a6dd-9896-f9f2-6d54-ef26725aa974" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.711 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" iface="eth0" netns="/var/run/netns/cni-5fa6a6dd-9896-f9f2-6d54-ef26725aa974" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.712 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" iface="eth0" netns="/var/run/netns/cni-5fa6a6dd-9896-f9f2-6d54-ef26725aa974" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.712 [INFO][4377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.712 [INFO][4377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.737 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.739 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.739 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.747 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.747 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.749 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:11.751299 containerd[1531]: 2025-09-13 00:08:11.750 [INFO][4377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:11.752512 containerd[1531]: time="2025-09-13T00:08:11.751388239Z" level=info msg="TearDown network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" successfully" Sep 13 00:08:11.752512 containerd[1531]: time="2025-09-13T00:08:11.751402861Z" level=info msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" returns successfully" Sep 13 00:08:11.752512 containerd[1531]: time="2025-09-13T00:08:11.751846856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prqfd,Uid:42d9d358-be40-4d01-b1a6-5f1a28850352,Namespace:calico-system,Attempt:1,}" Sep 13 00:08:11.800330 systemd[1]: run-netns-cni\x2d5fa6a6dd\x2d9896\x2df9f2\x2d6d54\x2def26725aa974.mount: Deactivated successfully. 
Sep 13 00:08:11.884695 systemd-networkd[1433]: calif13d7d76da3: Link UP Sep 13 00:08:11.885782 systemd-networkd[1433]: calif13d7d76da3: Gained carrier Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.816 [INFO][4404] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.823 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--prqfd-eth0 csi-node-driver- calico-system 42d9d358-be40-4d01-b1a6-5f1a28850352 893 0 2025-09-13 00:07:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-prqfd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif13d7d76da3 [] [] }} ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.823 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.845 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" HandleID="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.901055 containerd[1531]: 
2025-09-13 00:08:11.845 [INFO][4415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" HandleID="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-prqfd", "timestamp":"2025-09-13 00:08:11.845285168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.845 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.845 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.845 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.851 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.854 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.857 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.862 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.863 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.863 [INFO][4415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.864 [INFO][4415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94 Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.868 [INFO][4415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.881 [INFO][4415] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.881 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" host="localhost" Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.881 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:11.901055 containerd[1531]: 2025-09-13 00:08:11.881 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" HandleID="k8s-pod-network.5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.882 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prqfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42d9d358-be40-4d01-b1a6-5f1a28850352", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-prqfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif13d7d76da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.882 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.882 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif13d7d76da3 ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.885 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.886 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" 
Namespace="calico-system" Pod="csi-node-driver-prqfd" WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prqfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42d9d358-be40-4d01-b1a6-5f1a28850352", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94", Pod:"csi-node-driver-prqfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif13d7d76da3", MAC:"0e:4b:12:9e:9f:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:11.910014 containerd[1531]: 2025-09-13 00:08:11.899 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94" Namespace="calico-system" Pod="csi-node-driver-prqfd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:11.915104 containerd[1531]: time="2025-09-13T00:08:11.915048154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:11.915104 containerd[1531]: time="2025-09-13T00:08:11.915087877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:11.915223 containerd[1531]: time="2025-09-13T00:08:11.915099043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:11.915223 containerd[1531]: time="2025-09-13T00:08:11.915165074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:11.927851 systemd[1]: run-containerd-runc-k8s.io-5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94-runc.g7QQdi.mount: Deactivated successfully. Sep 13 00:08:11.939268 systemd[1]: Started cri-containerd-5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94.scope - libcontainer container 5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94. 
Sep 13 00:08:11.946267 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:11.952907 containerd[1531]: time="2025-09-13T00:08:11.952885756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prqfd,Uid:42d9d358-be40-4d01-b1a6-5f1a28850352,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94\"" Sep 13 00:08:12.264381 systemd-networkd[1433]: caliad72d5c3c6a: Gained IPv6LL Sep 13 00:08:12.690919 containerd[1531]: time="2025-09-13T00:08:12.690826500Z" level=info msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" Sep 13 00:08:12.776326 systemd-networkd[1433]: cali62d605b3a21: Gained IPv6LL Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.748 [INFO][4482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.748 [INFO][4482] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" iface="eth0" netns="/var/run/netns/cni-4f06fa0e-22c8-59a4-2938-cda5406e3046" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.749 [INFO][4482] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" iface="eth0" netns="/var/run/netns/cni-4f06fa0e-22c8-59a4-2938-cda5406e3046" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.751 [INFO][4482] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" iface="eth0" netns="/var/run/netns/cni-4f06fa0e-22c8-59a4-2938-cda5406e3046" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.751 [INFO][4482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.751 [INFO][4482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.802 [INFO][4490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.802 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.803 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.808 [WARNING][4490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.808 [INFO][4490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.809 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:12.813224 containerd[1531]: 2025-09-13 00:08:12.812 [INFO][4482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:12.823597 containerd[1531]: time="2025-09-13T00:08:12.813651306Z" level=info msg="TearDown network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" successfully" Sep 13 00:08:12.823597 containerd[1531]: time="2025-09-13T00:08:12.813669788Z" level=info msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" returns successfully" Sep 13 00:08:12.816644 systemd[1]: run-netns-cni\x2d4f06fa0e\x2d22c8\x2d59a4\x2d2938\x2dcda5406e3046.mount: Deactivated successfully. 
Sep 13 00:08:12.833482 containerd[1531]: time="2025-09-13T00:08:12.829370720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvgjw,Uid:c55cbdbc-12fb-409d-96af-7aa0cff6b9cc,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:13.140014 systemd-networkd[1433]: cali9ee488ee3a9: Link UP Sep 13 00:08:13.140140 systemd-networkd[1433]: cali9ee488ee3a9: Gained carrier Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.063 [INFO][4517] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.075 [INFO][4517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0 coredns-668d6bf9bc- kube-system c55cbdbc-12fb-409d-96af-7aa0cff6b9cc 899 0 2025-09-13 00:07:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bvgjw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ee488ee3a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.075 [INFO][4517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.098 [INFO][4531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" 
HandleID="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.098 [INFO][4531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" HandleID="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bvgjw", "timestamp":"2025-09-13 00:08:13.098660034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.098 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.098 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.098 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.104 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.112 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.118 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.120 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.121 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.121 [INFO][4531] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.125 [INFO][4531] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5 Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.129 [INFO][4531] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.134 [INFO][4531] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.134 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" host="localhost" Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.134 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.152812 containerd[1531]: 2025-09-13 00:08:13.134 [INFO][4531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" HandleID="k8s-pod-network.9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.135 [INFO][4517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bvgjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ee488ee3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.135 [INFO][4517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.135 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ee488ee3a9 ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.139 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.141 [INFO][4517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5", Pod:"coredns-668d6bf9bc-bvgjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ee488ee3a9", MAC:"76:04:d7:7d:3a:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:13.153935 containerd[1531]: 2025-09-13 00:08:13.150 [INFO][4517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5" Namespace="kube-system" Pod="coredns-668d6bf9bc-bvgjw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:13.167064 containerd[1531]: time="2025-09-13T00:08:13.166890170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:13.167064 containerd[1531]: time="2025-09-13T00:08:13.166956809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:13.168255 containerd[1531]: time="2025-09-13T00:08:13.167246309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:13.168255 containerd[1531]: time="2025-09-13T00:08:13.167319393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:13.188286 systemd[1]: Started cri-containerd-9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5.scope - libcontainer container 9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5. 
Sep 13 00:08:13.198024 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:13.224403 containerd[1531]: time="2025-09-13T00:08:13.224210642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bvgjw,Uid:c55cbdbc-12fb-409d-96af-7aa0cff6b9cc,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5\"" Sep 13 00:08:13.227233 containerd[1531]: time="2025-09-13T00:08:13.227167044Z" level=info msg="CreateContainer within sandbox \"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:13.248317 containerd[1531]: time="2025-09-13T00:08:13.248292351Z" level=info msg="CreateContainer within sandbox \"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b654a3b7946ed38c2711c260d3e7197947207cd7869002faf07855f6b8e43cfa\"" Sep 13 00:08:13.249704 containerd[1531]: time="2025-09-13T00:08:13.249689017Z" level=info msg="StartContainer for \"b654a3b7946ed38c2711c260d3e7197947207cd7869002faf07855f6b8e43cfa\"" Sep 13 00:08:13.281721 systemd[1]: Started cri-containerd-b654a3b7946ed38c2711c260d3e7197947207cd7869002faf07855f6b8e43cfa.scope - libcontainer container b654a3b7946ed38c2711c260d3e7197947207cd7869002faf07855f6b8e43cfa. 
Sep 13 00:08:13.311394 containerd[1531]: time="2025-09-13T00:08:13.311300543Z" level=info msg="StartContainer for \"b654a3b7946ed38c2711c260d3e7197947207cd7869002faf07855f6b8e43cfa\" returns successfully" Sep 13 00:08:13.480579 systemd-networkd[1433]: calif13d7d76da3: Gained IPv6LL Sep 13 00:08:13.673606 containerd[1531]: time="2025-09-13T00:08:13.673325232Z" level=info msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" Sep 13 00:08:13.674986 containerd[1531]: time="2025-09-13T00:08:13.674835995Z" level=info msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.727 [INFO][4636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.727 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" iface="eth0" netns="/var/run/netns/cni-94c963aa-4785-c970-8c78-6d7e17fc1511" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.728 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" iface="eth0" netns="/var/run/netns/cni-94c963aa-4785-c970-8c78-6d7e17fc1511" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.728 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" iface="eth0" netns="/var/run/netns/cni-94c963aa-4785-c970-8c78-6d7e17fc1511" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.728 [INFO][4636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.728 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.753 [INFO][4650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.753 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.753 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.763 [WARNING][4650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.763 [INFO][4650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.767 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.777793 containerd[1531]: 2025-09-13 00:08:13.774 [INFO][4636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:13.785341 containerd[1531]: time="2025-09-13T00:08:13.778378097Z" level=info msg="TearDown network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" successfully" Sep 13 00:08:13.785341 containerd[1531]: time="2025-09-13T00:08:13.778400989Z" level=info msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" returns successfully" Sep 13 00:08:13.785341 containerd[1531]: time="2025-09-13T00:08:13.779150879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-h2slx,Uid:dc744470-c98f-40e0-a342-3186715e3534,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.753 [INFO][4637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.753 [INFO][4637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" iface="eth0" netns="/var/run/netns/cni-7ab40ffa-1db8-513f-af5a-a18bf5b5046f" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.754 [INFO][4637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" iface="eth0" netns="/var/run/netns/cni-7ab40ffa-1db8-513f-af5a-a18bf5b5046f" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.754 [INFO][4637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" iface="eth0" netns="/var/run/netns/cni-7ab40ffa-1db8-513f-af5a-a18bf5b5046f" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.754 [INFO][4637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.754 [INFO][4637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.782 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.782 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.782 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.791 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.791 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.792 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:13.794906 containerd[1531]: 2025-09-13 00:08:13.793 [INFO][4637] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:13.800128 containerd[1531]: time="2025-09-13T00:08:13.795048692Z" level=info msg="TearDown network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" successfully" Sep 13 00:08:13.800128 containerd[1531]: time="2025-09-13T00:08:13.795064154Z" level=info msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" returns successfully" Sep 13 00:08:13.800128 containerd[1531]: time="2025-09-13T00:08:13.795656455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-8wsdp,Uid:b6ce864c-530a-4daf-972c-b9dd880246b5,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:08:13.816311 systemd[1]: run-containerd-runc-k8s.io-9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5-runc.d6YbRV.mount: Deactivated successfully. Sep 13 00:08:13.816372 systemd[1]: run-netns-cni\x2d94c963aa\x2d4785\x2dc970\x2d8c78\x2d6d7e17fc1511.mount: Deactivated successfully. 
Sep 13 00:08:13.816410 systemd[1]: run-netns-cni\x2d7ab40ffa\x2d1db8\x2d513f\x2daf5a\x2da18bf5b5046f.mount: Deactivated successfully. Sep 13 00:08:13.980363 kubelet[2723]: I0913 00:08:13.980310 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bvgjw" podStartSLOduration=35.975506658 podStartE2EDuration="35.975506658s" podCreationTimestamp="2025-09-13 00:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:13.97539334 +0000 UTC m=+41.373117687" watchObservedRunningTime="2025-09-13 00:08:13.975506658 +0000 UTC m=+41.373231006" Sep 13 00:08:14.013224 containerd[1531]: time="2025-09-13T00:08:14.013156771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.013993 containerd[1531]: time="2025-09-13T00:08:14.013809689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:08:14.019209 containerd[1531]: time="2025-09-13T00:08:14.018094378Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.035858 containerd[1531]: time="2025-09-13T00:08:14.035246415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:14.037163 containerd[1531]: time="2025-09-13T00:08:14.037143856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.827727057s" Sep 13 00:08:14.037242 containerd[1531]: time="2025-09-13T00:08:14.037164604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:08:14.039768 containerd[1531]: time="2025-09-13T00:08:14.039738169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:08:14.056990 containerd[1531]: time="2025-09-13T00:08:14.056964179Z" level=info msg="CreateContainer within sandbox \"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:08:14.072532 containerd[1531]: time="2025-09-13T00:08:14.072411239Z" level=info msg="CreateContainer within sandbox \"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591\"" Sep 13 00:08:14.074987 containerd[1531]: time="2025-09-13T00:08:14.074912817Z" level=info msg="StartContainer for \"e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591\"" Sep 13 00:08:14.182551 systemd[1]: Started cri-containerd-e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591.scope - libcontainer container e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591. 
Sep 13 00:08:14.199978 systemd-networkd[1433]: cali55ef14a4cb3: Link UP Sep 13 00:08:14.200874 systemd-networkd[1433]: cali55ef14a4cb3: Gained carrier Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.092 [INFO][4685] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.105 [INFO][4685] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0 calico-apiserver-748877ff9b- calico-apiserver dc744470-c98f-40e0-a342-3186715e3534 912 0 2025-09-13 00:07:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748877ff9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-748877ff9b-h2slx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55ef14a4cb3 [] [] }} ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.105 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.156 [INFO][4724] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" HandleID="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" 
Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.156 [INFO][4724] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" HandleID="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-748877ff9b-h2slx", "timestamp":"2025-09-13 00:08:14.156040217 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.156 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.156 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.156 [INFO][4724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.165 [INFO][4724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.169 [INFO][4724] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.175 [INFO][4724] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.176 [INFO][4724] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.179 [INFO][4724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.179 [INFO][4724] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.180 [INFO][4724] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.182 [INFO][4724] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.191 [INFO][4724] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.191 [INFO][4724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" host="localhost" Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.191 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:14.216870 containerd[1531]: 2025-09-13 00:08:14.191 [INFO][4724] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" HandleID="k8s-pod-network.79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.194 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc744470-c98f-40e0-a342-3186715e3534", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-748877ff9b-h2slx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ef14a4cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.195 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.195 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55ef14a4cb3 ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.201 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.202 [INFO][4685] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc744470-c98f-40e0-a342-3186715e3534", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e", Pod:"calico-apiserver-748877ff9b-h2slx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ef14a4cb3", MAC:"1a:54:dd:2f:f9:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.219235 containerd[1531]: 2025-09-13 00:08:14.213 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-h2slx" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:14.234674 containerd[1531]: time="2025-09-13T00:08:14.234521269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:14.234674 containerd[1531]: time="2025-09-13T00:08:14.234554758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:14.234674 containerd[1531]: time="2025-09-13T00:08:14.234582000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.234674 containerd[1531]: time="2025-09-13T00:08:14.234637166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.249866 systemd[1]: Started cri-containerd-79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e.scope - libcontainer container 79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e. 
Sep 13 00:08:14.256722 containerd[1531]: time="2025-09-13T00:08:14.256681210Z" level=info msg="StartContainer for \"e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591\" returns successfully" Sep 13 00:08:14.274027 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:14.300164 systemd-networkd[1433]: cali69c38515ffc: Link UP Sep 13 00:08:14.300527 systemd-networkd[1433]: cali69c38515ffc: Gained carrier Sep 13 00:08:14.316799 containerd[1531]: time="2025-09-13T00:08:14.315593469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-h2slx,Uid:dc744470-c98f-40e0-a342-3186715e3534,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e\"" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.085 [INFO][4683] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.102 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0 calico-apiserver-748877ff9b- calico-apiserver b6ce864c-530a-4daf-972c-b9dd880246b5 913 0 2025-09-13 00:07:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:748877ff9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-748877ff9b-8wsdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69c38515ffc [] [] }} ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-" Sep 13 
00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.102 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.167 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" HandleID="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.167 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" HandleID="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-748877ff9b-8wsdp", "timestamp":"2025-09-13 00:08:14.167491699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.168 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.191 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.193 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.266 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.269 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.277 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.278 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.281 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.281 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.283 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.284 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.288 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.288 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" host="localhost" Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.288 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:14.319360 containerd[1531]: 2025-09-13 00:08:14.288 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" HandleID="k8s-pod-network.58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.297 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6ce864c-530a-4daf-972c-b9dd880246b5", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-748877ff9b-8wsdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c38515ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.297 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.297 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69c38515ffc ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.301 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.302 [INFO][4683] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6ce864c-530a-4daf-972c-b9dd880246b5", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf", Pod:"calico-apiserver-748877ff9b-8wsdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c38515ffc", MAC:"52:4a:08:5a:f7:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:14.319762 containerd[1531]: 2025-09-13 00:08:14.314 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf" Namespace="calico-apiserver" Pod="calico-apiserver-748877ff9b-8wsdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:14.343043 containerd[1531]: time="2025-09-13T00:08:14.342193690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:14.343043 containerd[1531]: time="2025-09-13T00:08:14.342227671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:14.343043 containerd[1531]: time="2025-09-13T00:08:14.342234553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.343043 containerd[1531]: time="2025-09-13T00:08:14.342294109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:14.360332 systemd[1]: Started cri-containerd-58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf.scope - libcontainer container 58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf. 
Sep 13 00:08:14.368497 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:14.376261 systemd-networkd[1433]: cali9ee488ee3a9: Gained IPv6LL Sep 13 00:08:14.387935 containerd[1531]: time="2025-09-13T00:08:14.387911266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-748877ff9b-8wsdp,Uid:b6ce864c-530a-4daf-972c-b9dd880246b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf\"" Sep 13 00:08:14.962400 kubelet[2723]: I0913 00:08:14.962142 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5ff7f7d44c-52xq6" podStartSLOduration=21.930195539 podStartE2EDuration="24.962128615s" podCreationTimestamp="2025-09-13 00:07:50 +0000 UTC" firstStartedPulling="2025-09-13 00:08:11.006505456 +0000 UTC m=+38.404229803" lastFinishedPulling="2025-09-13 00:08:14.038438535 +0000 UTC m=+41.436162879" observedRunningTime="2025-09-13 00:08:14.959513698 +0000 UTC m=+42.357238050" watchObservedRunningTime="2025-09-13 00:08:14.962128615 +0000 UTC m=+42.359852974" Sep 13 00:08:15.274413 kubelet[2723]: I0913 00:08:15.274336 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:15.674487 containerd[1531]: time="2025-09-13T00:08:15.674417696Z" level=info msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.726 [INFO][4910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.726 [INFO][4910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" iface="eth0" netns="/var/run/netns/cni-e26e2c3f-c00b-e40c-122e-8b0a1420f772" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.726 [INFO][4910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" iface="eth0" netns="/var/run/netns/cni-e26e2c3f-c00b-e40c-122e-8b0a1420f772" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.728 [INFO][4910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" iface="eth0" netns="/var/run/netns/cni-e26e2c3f-c00b-e40c-122e-8b0a1420f772" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.728 [INFO][4910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.728 [INFO][4910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.749 [INFO][4917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.749 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.749 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.756 [WARNING][4917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.756 [INFO][4917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.756 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:15.761884 containerd[1531]: 2025-09-13 00:08:15.758 [INFO][4910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:15.764141 systemd[1]: run-netns-cni\x2de26e2c3f\x2dc00b\x2de40c\x2d122e\x2d8b0a1420f772.mount: Deactivated successfully. 
Sep 13 00:08:15.765185 containerd[1531]: time="2025-09-13T00:08:15.764818090Z" level=info msg="TearDown network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" successfully" Sep 13 00:08:15.765185 containerd[1531]: time="2025-09-13T00:08:15.764839463Z" level=info msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" returns successfully" Sep 13 00:08:15.765867 containerd[1531]: time="2025-09-13T00:08:15.765754634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27vjx,Uid:971b88ba-d1d5-453d-978e-99740f9cff86,Namespace:kube-system,Attempt:1,}" Sep 13 00:08:15.784522 systemd-networkd[1433]: cali55ef14a4cb3: Gained IPv6LL Sep 13 00:08:15.925675 systemd-networkd[1433]: califbb29fd2a46: Link UP Sep 13 00:08:15.926695 systemd-networkd[1433]: califbb29fd2a46: Gained carrier Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.848 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--27vjx-eth0 coredns-668d6bf9bc- kube-system 971b88ba-d1d5-453d-978e-99740f9cff86 947 0 2025-09-13 00:07:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-27vjx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbb29fd2a46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.848 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.886 [INFO][4939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" HandleID="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.887 [INFO][4939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" HandleID="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-27vjx", "timestamp":"2025-09-13 00:08:15.886942073 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.887 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.887 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.887 [INFO][4939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.891 [INFO][4939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.893 [INFO][4939] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.895 [INFO][4939] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.896 [INFO][4939] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.897 [INFO][4939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.897 [INFO][4939] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.898 [INFO][4939] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.902 [INFO][4939] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.916 [INFO][4939] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.916 [INFO][4939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" host="localhost" Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.916 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:15.947894 containerd[1531]: 2025-09-13 00:08:15.916 [INFO][4939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" HandleID="k8s-pod-network.938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.920 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27vjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"971b88ba-d1d5-453d-978e-99740f9cff86", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-27vjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb29fd2a46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.920 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.920 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbb29fd2a46 ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.927 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.928 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27vjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"971b88ba-d1d5-453d-978e-99740f9cff86", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b", Pod:"coredns-668d6bf9bc-27vjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb29fd2a46", MAC:"9a:c4:76:fc:83:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:15.949032 containerd[1531]: 2025-09-13 00:08:15.942 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b" Namespace="kube-system" Pod="coredns-668d6bf9bc-27vjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:15.978698 containerd[1531]: time="2025-09-13T00:08:15.978644810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:08:15.978859 containerd[1531]: time="2025-09-13T00:08:15.978680332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:08:15.978859 containerd[1531]: time="2025-09-13T00:08:15.978698943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.978859 containerd[1531]: time="2025-09-13T00:08:15.978748723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:08:15.990107 kernel: bpftool[4973]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:08:15.998575 systemd[1]: run-containerd-runc-k8s.io-938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b-runc.lSCmkt.mount: Deactivated successfully. Sep 13 00:08:16.011551 systemd[1]: Started cri-containerd-938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b.scope - libcontainer container 938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b. 
Sep 13 00:08:16.019881 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:08:16.041612 containerd[1531]: time="2025-09-13T00:08:16.041589111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27vjx,Uid:971b88ba-d1d5-453d-978e-99740f9cff86,Namespace:kube-system,Attempt:1,} returns sandbox id \"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b\"" Sep 13 00:08:16.077895 containerd[1531]: time="2025-09-13T00:08:16.077791086Z" level=info msg="CreateContainer within sandbox \"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:08:16.105102 systemd-networkd[1433]: cali69c38515ffc: Gained IPv6LL Sep 13 00:08:16.218593 containerd[1531]: time="2025-09-13T00:08:16.218519278Z" level=info msg="CreateContainer within sandbox \"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41d1a19753d9c86f5b37ee8635ac0945421905b5ad5d9ef11a62ef024fb41952\"" Sep 13 00:08:16.221737 containerd[1531]: time="2025-09-13T00:08:16.221496977Z" level=info msg="StartContainer for \"41d1a19753d9c86f5b37ee8635ac0945421905b5ad5d9ef11a62ef024fb41952\"" Sep 13 00:08:16.261312 systemd[1]: Started cri-containerd-41d1a19753d9c86f5b37ee8635ac0945421905b5ad5d9ef11a62ef024fb41952.scope - libcontainer container 41d1a19753d9c86f5b37ee8635ac0945421905b5ad5d9ef11a62ef024fb41952. 
Sep 13 00:08:16.304389 containerd[1531]: time="2025-09-13T00:08:16.304363071Z" level=info msg="StartContainer for \"41d1a19753d9c86f5b37ee8635ac0945421905b5ad5d9ef11a62ef024fb41952\" returns successfully" Sep 13 00:08:16.576866 systemd-networkd[1433]: vxlan.calico: Link UP Sep 13 00:08:16.576871 systemd-networkd[1433]: vxlan.calico: Gained carrier Sep 13 00:08:16.984108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233164910.mount: Deactivated successfully. Sep 13 00:08:17.009230 kubelet[2723]: I0913 00:08:17.008810 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:17.063205 kubelet[2723]: I0913 00:08:17.062677 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-27vjx" podStartSLOduration=39.062661773 podStartE2EDuration="39.062661773s" podCreationTimestamp="2025-09-13 00:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:08:17.008317059 +0000 UTC m=+44.406041411" watchObservedRunningTime="2025-09-13 00:08:17.062661773 +0000 UTC m=+44.460386127" Sep 13 00:08:17.640272 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Sep 13 00:08:17.960303 systemd-networkd[1433]: califbb29fd2a46: Gained IPv6LL Sep 13 00:08:18.048201 containerd[1531]: time="2025-09-13T00:08:18.048111194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:18.051813 containerd[1531]: time="2025-09-13T00:08:18.051755383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:08:18.057492 containerd[1531]: time="2025-09-13T00:08:18.057472329Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 13 00:08:18.071988 containerd[1531]: time="2025-09-13T00:08:18.071881039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:18.072441 containerd[1531]: time="2025-09-13T00:08:18.072352772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.032594825s" Sep 13 00:08:18.072441 containerd[1531]: time="2025-09-13T00:08:18.072371558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:08:18.124994 containerd[1531]: time="2025-09-13T00:08:18.124282929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:08:18.169629 containerd[1531]: time="2025-09-13T00:08:18.169606263Z" level=info msg="CreateContainer within sandbox \"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:08:18.177361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52821982.mount: Deactivated successfully. 
Sep 13 00:08:18.180376 containerd[1531]: time="2025-09-13T00:08:18.180355396Z" level=info msg="CreateContainer within sandbox \"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515\"" Sep 13 00:08:18.192218 containerd[1531]: time="2025-09-13T00:08:18.191547274Z" level=info msg="StartContainer for \"d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515\"" Sep 13 00:08:18.218313 systemd[1]: Started cri-containerd-d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515.scope - libcontainer container d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515. Sep 13 00:08:18.273524 containerd[1531]: time="2025-09-13T00:08:18.273456217Z" level=info msg="StartContainer for \"d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515\" returns successfully" Sep 13 00:08:19.207236 kubelet[2723]: I0913 00:08:19.202938 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-pgggr" podStartSLOduration=23.196973326 podStartE2EDuration="30.197855964s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:08:11.104860421 +0000 UTC m=+38.502584765" lastFinishedPulling="2025-09-13 00:08:18.105743058 +0000 UTC m=+45.503467403" observedRunningTime="2025-09-13 00:08:19.192984993 +0000 UTC m=+46.590709346" watchObservedRunningTime="2025-09-13 00:08:19.197855964 +0000 UTC m=+46.595580310" Sep 13 00:08:21.413759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448507863.mount: Deactivated successfully. 
Sep 13 00:08:21.776745 containerd[1531]: time="2025-09-13T00:08:21.776523103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:21.778695 containerd[1531]: time="2025-09-13T00:08:21.778633278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:08:21.780431 containerd[1531]: time="2025-09-13T00:08:21.780040715Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:21.783476 containerd[1531]: time="2025-09-13T00:08:21.783461304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:21.784900 containerd[1531]: time="2025-09-13T00:08:21.783904252Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.659597829s" Sep 13 00:08:21.784956 containerd[1531]: time="2025-09-13T00:08:21.784945529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:08:21.851871 containerd[1531]: time="2025-09-13T00:08:21.851852125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:08:21.859025 containerd[1531]: time="2025-09-13T00:08:21.858917756Z" level=info msg="CreateContainer within sandbox 
\"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:08:21.868570 containerd[1531]: time="2025-09-13T00:08:21.868503718Z" level=info msg="CreateContainer within sandbox \"7f46dcaf6844fa9aad12cc1e7c611fb9f441c81cd7405973e50ff774a9d9bced\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"215445bcb9b97463920d8d331c7c9ce533783148a7f40b6cf2267cd85e28deec\"" Sep 13 00:08:21.870046 containerd[1531]: time="2025-09-13T00:08:21.868889206Z" level=info msg="StartContainer for \"215445bcb9b97463920d8d331c7c9ce533783148a7f40b6cf2267cd85e28deec\"" Sep 13 00:08:21.975264 systemd[1]: Started cri-containerd-215445bcb9b97463920d8d331c7c9ce533783148a7f40b6cf2267cd85e28deec.scope - libcontainer container 215445bcb9b97463920d8d331c7c9ce533783148a7f40b6cf2267cd85e28deec. Sep 13 00:08:22.017694 containerd[1531]: time="2025-09-13T00:08:22.017661047Z" level=info msg="StartContainer for \"215445bcb9b97463920d8d331c7c9ce533783148a7f40b6cf2267cd85e28deec\" returns successfully" Sep 13 00:08:22.342442 kubelet[2723]: I0913 00:08:22.342190 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f5db7d7dc-ctpvc" podStartSLOduration=1.309534059 podStartE2EDuration="13.315579506s" podCreationTimestamp="2025-09-13 00:08:09 +0000 UTC" firstStartedPulling="2025-09-13 00:08:09.84569924 +0000 UTC m=+37.243423584" lastFinishedPulling="2025-09-13 00:08:21.851744687 +0000 UTC m=+49.249469031" observedRunningTime="2025-09-13 00:08:22.278747438 +0000 UTC m=+49.676471784" watchObservedRunningTime="2025-09-13 00:08:22.315579506 +0000 UTC m=+49.713303860" Sep 13 00:08:23.531038 containerd[1531]: time="2025-09-13T00:08:23.530486056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.531748 containerd[1531]: time="2025-09-13T00:08:23.531715767Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:08:23.532241 containerd[1531]: time="2025-09-13T00:08:23.532224361Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.534622 containerd[1531]: time="2025-09-13T00:08:23.534026445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:23.534926 containerd[1531]: time="2025-09-13T00:08:23.534607143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.682620151s" Sep 13 00:08:23.535377 containerd[1531]: time="2025-09-13T00:08:23.535170915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:08:23.536982 containerd[1531]: time="2025-09-13T00:08:23.536389584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:23.541177 containerd[1531]: time="2025-09-13T00:08:23.541034957Z" level=info msg="CreateContainer within sandbox \"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:08:23.567447 containerd[1531]: time="2025-09-13T00:08:23.567406000Z" level=info msg="CreateContainer within sandbox \"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container 
id \"c9d8410bf4b7587afb2764525a8d701ef2aead6f4ec0a1e7a69809188d096401\"" Sep 13 00:08:23.569852 containerd[1531]: time="2025-09-13T00:08:23.569829158Z" level=info msg="StartContainer for \"c9d8410bf4b7587afb2764525a8d701ef2aead6f4ec0a1e7a69809188d096401\"" Sep 13 00:08:23.614306 systemd[1]: Started cri-containerd-c9d8410bf4b7587afb2764525a8d701ef2aead6f4ec0a1e7a69809188d096401.scope - libcontainer container c9d8410bf4b7587afb2764525a8d701ef2aead6f4ec0a1e7a69809188d096401. Sep 13 00:08:23.651978 containerd[1531]: time="2025-09-13T00:08:23.651948616Z" level=info msg="StartContainer for \"c9d8410bf4b7587afb2764525a8d701ef2aead6f4ec0a1e7a69809188d096401\" returns successfully" Sep 13 00:08:26.534136 containerd[1531]: time="2025-09-13T00:08:26.534098677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.535532 containerd[1531]: time="2025-09-13T00:08:26.535507745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:08:26.536021 containerd[1531]: time="2025-09-13T00:08:26.536004806Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.546166 containerd[1531]: time="2025-09-13T00:08:26.546150135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:26.547015 containerd[1531]: time="2025-09-13T00:08:26.546997649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.010587229s" Sep 13 00:08:26.547794 containerd[1531]: time="2025-09-13T00:08:26.547078453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:26.551673 containerd[1531]: time="2025-09-13T00:08:26.551587016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:08:26.634359 systemd[1]: run-containerd-runc-k8s.io-e196af573d04e14806e4686b34754c0aa3daf46782491cf6f4711eb67e39e591-runc.VfjMj4.mount: Deactivated successfully. Sep 13 00:08:26.662884 containerd[1531]: time="2025-09-13T00:08:26.662853998Z" level=info msg="CreateContainer within sandbox \"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:26.675077 containerd[1531]: time="2025-09-13T00:08:26.675051098Z" level=info msg="CreateContainer within sandbox \"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22a8df81bf05c1602667276d5ce9fe4ba6e324b36815a5923346f72b93787a30\"" Sep 13 00:08:26.677654 containerd[1531]: time="2025-09-13T00:08:26.677515578Z" level=info msg="StartContainer for \"22a8df81bf05c1602667276d5ce9fe4ba6e324b36815a5923346f72b93787a30\"" Sep 13 00:08:26.723298 systemd[1]: Started cri-containerd-22a8df81bf05c1602667276d5ce9fe4ba6e324b36815a5923346f72b93787a30.scope - libcontainer container 22a8df81bf05c1602667276d5ce9fe4ba6e324b36815a5923346f72b93787a30. 
Sep 13 00:08:26.766590 containerd[1531]: time="2025-09-13T00:08:26.766565094Z" level=info msg="StartContainer for \"22a8df81bf05c1602667276d5ce9fe4ba6e324b36815a5923346f72b93787a30\" returns successfully" Sep 13 00:08:27.000203 containerd[1531]: time="2025-09-13T00:08:26.999214224Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:27.005396 containerd[1531]: time="2025-09-13T00:08:27.005364369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:08:27.013983 containerd[1531]: time="2025-09-13T00:08:27.007847045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 456.243803ms" Sep 13 00:08:27.013983 containerd[1531]: time="2025-09-13T00:08:27.007866874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:08:27.013983 containerd[1531]: time="2025-09-13T00:08:27.008514173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:08:27.018950 containerd[1531]: time="2025-09-13T00:08:27.018927316Z" level=info msg="CreateContainer within sandbox \"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:08:27.078631 containerd[1531]: time="2025-09-13T00:08:27.078469525Z" level=info msg="CreateContainer within sandbox \"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1127d160250a3c9c9de765bc6185e9758916723c2a3f54bfc1e577c08589241\"" Sep 13 00:08:27.079782 containerd[1531]: time="2025-09-13T00:08:27.079631648Z" level=info msg="StartContainer for \"d1127d160250a3c9c9de765bc6185e9758916723c2a3f54bfc1e577c08589241\"" Sep 13 00:08:27.107318 systemd[1]: Started cri-containerd-d1127d160250a3c9c9de765bc6185e9758916723c2a3f54bfc1e577c08589241.scope - libcontainer container d1127d160250a3c9c9de765bc6185e9758916723c2a3f54bfc1e577c08589241. Sep 13 00:08:27.164434 containerd[1531]: time="2025-09-13T00:08:27.164397247Z" level=info msg="StartContainer for \"d1127d160250a3c9c9de765bc6185e9758916723c2a3f54bfc1e577c08589241\" returns successfully" Sep 13 00:08:27.352722 kubelet[2723]: I0913 00:08:27.329954 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748877ff9b-h2slx" podStartSLOduration=28.094537546 podStartE2EDuration="40.329934099s" podCreationTimestamp="2025-09-13 00:07:47 +0000 UTC" firstStartedPulling="2025-09-13 00:08:14.31811969 +0000 UTC m=+41.715844034" lastFinishedPulling="2025-09-13 00:08:26.553516242 +0000 UTC m=+53.951240587" observedRunningTime="2025-09-13 00:08:27.329882753 +0000 UTC m=+54.727607105" watchObservedRunningTime="2025-09-13 00:08:27.329934099 +0000 UTC m=+54.727658447" Sep 13 00:08:27.356108 kubelet[2723]: I0913 00:08:27.356077 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-748877ff9b-8wsdp" podStartSLOduration=27.736516198 podStartE2EDuration="40.356063837s" podCreationTimestamp="2025-09-13 00:07:47 +0000 UTC" firstStartedPulling="2025-09-13 00:08:14.38881602 +0000 UTC m=+41.786540364" lastFinishedPulling="2025-09-13 00:08:27.008363656 +0000 UTC m=+54.406088003" observedRunningTime="2025-09-13 00:08:27.35249876 +0000 UTC m=+54.750223104" watchObservedRunningTime="2025-09-13 00:08:27.356063837 +0000 UTC 
m=+54.753788185" Sep 13 00:08:28.305664 kubelet[2723]: I0913 00:08:28.299786 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:28.307146 kubelet[2723]: I0913 00:08:28.300470 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:29.520911 containerd[1531]: time="2025-09-13T00:08:29.520428885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.520911 containerd[1531]: time="2025-09-13T00:08:29.520846797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:08:29.521229 containerd[1531]: time="2025-09-13T00:08:29.521158683Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.522280 containerd[1531]: time="2025-09-13T00:08:29.522263803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:08:29.528897 containerd[1531]: time="2025-09-13T00:08:29.528000019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.51946782s" Sep 13 00:08:29.528941 containerd[1531]: time="2025-09-13T00:08:29.528901617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:08:29.536492 containerd[1531]: time="2025-09-13T00:08:29.536474043Z" level=info msg="CreateContainer within sandbox \"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:08:29.549457 containerd[1531]: time="2025-09-13T00:08:29.549403899Z" level=info msg="CreateContainer within sandbox \"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bae2914578d620227ff9d26dc521acca4cc76d373092bc29b73aa71d6c23cacf\"" Sep 13 00:08:29.550206 containerd[1531]: time="2025-09-13T00:08:29.549634764Z" level=info msg="StartContainer for \"bae2914578d620227ff9d26dc521acca4cc76d373092bc29b73aa71d6c23cacf\"" Sep 13 00:08:29.597030 systemd[1]: Started cri-containerd-bae2914578d620227ff9d26dc521acca4cc76d373092bc29b73aa71d6c23cacf.scope - libcontainer container bae2914578d620227ff9d26dc521acca4cc76d373092bc29b73aa71d6c23cacf. 
Sep 13 00:08:29.626473 containerd[1531]: time="2025-09-13T00:08:29.626401502Z" level=info msg="StartContainer for \"bae2914578d620227ff9d26dc521acca4cc76d373092bc29b73aa71d6c23cacf\" returns successfully" Sep 13 00:08:30.016068 kubelet[2723]: I0913 00:08:30.015106 2723 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:08:30.018389 kubelet[2723]: I0913 00:08:30.018376 2723 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:08:30.335150 kubelet[2723]: I0913 00:08:30.334985 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-prqfd" podStartSLOduration=23.758096368 podStartE2EDuration="41.334971555s" podCreationTimestamp="2025-09-13 00:07:49 +0000 UTC" firstStartedPulling="2025-09-13 00:08:11.953640755 +0000 UTC m=+39.351365098" lastFinishedPulling="2025-09-13 00:08:29.530515944 +0000 UTC m=+56.928240285" observedRunningTime="2025-09-13 00:08:30.328122932 +0000 UTC m=+57.725847285" watchObservedRunningTime="2025-09-13 00:08:30.334971555 +0000 UTC m=+57.732695909" Sep 13 00:08:31.385790 systemd[1]: run-containerd-runc-k8s.io-d2e8dd7a0432e0471a0b9bfc82941d6c87df64b98a97c86e77371fe547c26515-runc.Blzu2i.mount: Deactivated successfully. Sep 13 00:08:32.802020 containerd[1531]: time="2025-09-13T00:08:32.800770151Z" level=info msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.330 [WARNING][5611] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27vjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"971b88ba-d1d5-453d-978e-99740f9cff86", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b", Pod:"coredns-668d6bf9bc-27vjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb29fd2a46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.336 [INFO][5611] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.336 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" iface="eth0" netns="" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.337 [INFO][5611] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.337 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.569 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.573 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.573 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.586 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.586 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.587 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.591617 containerd[1531]: 2025-09-13 00:08:33.589 [INFO][5611] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.606213 containerd[1531]: time="2025-09-13T00:08:33.606185173Z" level=info msg="TearDown network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" successfully" Sep 13 00:08:33.606213 containerd[1531]: time="2025-09-13T00:08:33.606212567Z" level=info msg="StopPodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" returns successfully" Sep 13 00:08:33.696388 containerd[1531]: time="2025-09-13T00:08:33.696360548Z" level=info msg="RemovePodSandbox for \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" Sep 13 00:08:33.698674 containerd[1531]: time="2025-09-13T00:08:33.698654929Z" level=info msg="Forcibly stopping sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\"" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.721 [WARNING][5635] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27vjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"971b88ba-d1d5-453d-978e-99740f9cff86", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938dccbc0f3611bcca69d50daa6ce5d50d188be4fb2086234c6768925f17b55b", Pod:"coredns-668d6bf9bc-27vjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbb29fd2a46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.721 [INFO][5635] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.721 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" iface="eth0" netns="" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.721 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.721 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.735 [INFO][5642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.735 [INFO][5642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.735 [INFO][5642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.741 [WARNING][5642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.741 [INFO][5642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" HandleID="k8s-pod-network.d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Workload="localhost-k8s-coredns--668d6bf9bc--27vjx-eth0" Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.745 [INFO][5642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.751037 containerd[1531]: 2025-09-13 00:08:33.749 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39" Sep 13 00:08:33.752361 containerd[1531]: time="2025-09-13T00:08:33.751063156Z" level=info msg="TearDown network for sandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" successfully" Sep 13 00:08:33.777193 containerd[1531]: time="2025-09-13T00:08:33.777106770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:33.791547 containerd[1531]: time="2025-09-13T00:08:33.791532676Z" level=info msg="RemovePodSandbox \"d3b00ff9c4e338254c44e1d9ecc6030aa4b3e4f18b7be551447bb6e8e5089b39\" returns successfully" Sep 13 00:08:33.794211 containerd[1531]: time="2025-09-13T00:08:33.794195267Z" level=info msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.813 [WARNING][5656] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0", GenerateName:"calico-kube-controllers-5ff7f7d44c-", Namespace:"calico-system", SelfLink:"", UID:"91c757d2-79fb-408e-8fa6-48713c1e092e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff7f7d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d", Pod:"calico-kube-controllers-5ff7f7d44c-52xq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62d605b3a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.813 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.813 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" iface="eth0" netns="" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.813 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.813 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.827 [INFO][5663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.827 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.827 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.832 [WARNING][5663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.832 [INFO][5663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.833 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.835957 containerd[1531]: 2025-09-13 00:08:33.834 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.837906 containerd[1531]: time="2025-09-13T00:08:33.835981342Z" level=info msg="TearDown network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" successfully" Sep 13 00:08:33.837906 containerd[1531]: time="2025-09-13T00:08:33.835996716Z" level=info msg="StopPodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" returns successfully" Sep 13 00:08:33.837906 containerd[1531]: time="2025-09-13T00:08:33.836595764Z" level=info msg="RemovePodSandbox for \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" Sep 13 00:08:33.837906 containerd[1531]: time="2025-09-13T00:08:33.836611217Z" level=info msg="Forcibly stopping sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\"" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.858 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0", GenerateName:"calico-kube-controllers-5ff7f7d44c-", Namespace:"calico-system", SelfLink:"", UID:"91c757d2-79fb-408e-8fa6-48713c1e092e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ff7f7d44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b520bf29e2284115ccd5d07b1d4c4a7eac18aa5f2ae3f30672f31334f9da7d6d", Pod:"calico-kube-controllers-5ff7f7d44c-52xq6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali62d605b3a21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.858 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.858 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" iface="eth0" netns="" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.858 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.858 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.870 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.871 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.871 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.874 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.874 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" HandleID="k8s-pod-network.62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Workload="localhost-k8s-calico--kube--controllers--5ff7f7d44c--52xq6-eth0" Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.875 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.877697 containerd[1531]: 2025-09-13 00:08:33.876 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814" Sep 13 00:08:33.879122 containerd[1531]: time="2025-09-13T00:08:33.877681613Z" level=info msg="TearDown network for sandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" successfully" Sep 13 00:08:33.885423 containerd[1531]: time="2025-09-13T00:08:33.885206124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:33.885423 containerd[1531]: time="2025-09-13T00:08:33.885241416Z" level=info msg="RemovePodSandbox \"62a669238e8de25d8a03d9706bb8c03e91d6c0be914fedde6abb0c3ebeaf5814\" returns successfully" Sep 13 00:08:33.885888 containerd[1531]: time="2025-09-13T00:08:33.885536389Z" level=info msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.907 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prqfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42d9d358-be40-4d01-b1a6-5f1a28850352", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94", Pod:"csi-node-driver-prqfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif13d7d76da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.907 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.908 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" iface="eth0" netns="" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.908 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.908 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.922 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.922 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.922 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.927 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.927 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.928 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.930841 containerd[1531]: 2025-09-13 00:08:33.929 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.931883 containerd[1531]: time="2025-09-13T00:08:33.930822251Z" level=info msg="TearDown network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" successfully" Sep 13 00:08:33.931883 containerd[1531]: time="2025-09-13T00:08:33.931003312Z" level=info msg="StopPodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" returns successfully" Sep 13 00:08:33.931883 containerd[1531]: time="2025-09-13T00:08:33.931316988Z" level=info msg="RemovePodSandbox for \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" Sep 13 00:08:33.931883 containerd[1531]: time="2025-09-13T00:08:33.931332141Z" level=info msg="Forcibly stopping sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\"" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.952 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prqfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"42d9d358-be40-4d01-b1a6-5f1a28850352", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c02421b9b1d53132eec1967348c12830d2645894eb75b5b7203f95662f96c94", Pod:"csi-node-driver-prqfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif13d7d76da3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.953 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.953 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" iface="eth0" netns="" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.953 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.953 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.968 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.968 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.968 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.972 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.972 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" HandleID="k8s-pod-network.e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Workload="localhost-k8s-csi--node--driver--prqfd-eth0" Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.973 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:33.976021 containerd[1531]: 2025-09-13 00:08:33.974 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6" Sep 13 00:08:33.976021 containerd[1531]: time="2025-09-13T00:08:33.975317034Z" level=info msg="TearDown network for sandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" successfully" Sep 13 00:08:33.997504 containerd[1531]: time="2025-09-13T00:08:33.997484922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:33.997544 containerd[1531]: time="2025-09-13T00:08:33.997519544Z" level=info msg="RemovePodSandbox \"e0263f43b3cfa4d19ec62c3d6a8580517818b57a4105d12213031d37c81f52a6\" returns successfully" Sep 13 00:08:33.997945 containerd[1531]: time="2025-09-13T00:08:33.997815545Z" level=info msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.034 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--pgggr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cd1bab13-6a3e-4841-8b06-97978d7ef332", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c", Pod:"goldmane-54d579b49d-pgggr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad72d5c3c6a", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.034 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.034 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" iface="eth0" netns="" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.034 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.034 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.046 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.046 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.046 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.049 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.049 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.050 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.052765 containerd[1531]: 2025-09-13 00:08:34.051 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.062671 containerd[1531]: time="2025-09-13T00:08:34.052961375Z" level=info msg="TearDown network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" successfully" Sep 13 00:08:34.062671 containerd[1531]: time="2025-09-13T00:08:34.052977229Z" level=info msg="StopPodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" returns successfully" Sep 13 00:08:34.062671 containerd[1531]: time="2025-09-13T00:08:34.053428214Z" level=info msg="RemovePodSandbox for \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" Sep 13 00:08:34.062671 containerd[1531]: time="2025-09-13T00:08:34.053478855Z" level=info msg="Forcibly stopping sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\"" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.077 [WARNING][5771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--pgggr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cd1bab13-6a3e-4841-8b06-97978d7ef332", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e14c6859306747c12c30901fd0d168b19ed5632cab03483380ce5f914829db8c", Pod:"goldmane-54d579b49d-pgggr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad72d5c3c6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.077 [INFO][5771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.077 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" iface="eth0" netns="" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.077 [INFO][5771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.077 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.092 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.092 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.092 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.097 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.097 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" HandleID="k8s-pod-network.41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Workload="localhost-k8s-goldmane--54d579b49d--pgggr-eth0" Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.097 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.100198 containerd[1531]: 2025-09-13 00:08:34.098 [INFO][5771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819" Sep 13 00:08:34.100198 containerd[1531]: time="2025-09-13T00:08:34.099927979Z" level=info msg="TearDown network for sandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" successfully" Sep 13 00:08:34.102430 containerd[1531]: time="2025-09-13T00:08:34.102231576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:34.102430 containerd[1531]: time="2025-09-13T00:08:34.102263994Z" level=info msg="RemovePodSandbox \"41993f93a8d69ba1a06995b5a771d8cb5ff09361b613d1752559b7917b2db819\" returns successfully" Sep 13 00:08:34.102743 containerd[1531]: time="2025-09-13T00:08:34.102610099Z" level=info msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.124 [WARNING][5792] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" WorkloadEndpoint="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.124 [INFO][5792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.124 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" iface="eth0" netns="" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.124 [INFO][5792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.124 [INFO][5792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.138 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.138 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.138 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.144 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.144 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.145 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.151307 containerd[1531]: 2025-09-13 00:08:34.147 [INFO][5792] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.151307 containerd[1531]: time="2025-09-13T00:08:34.150419185Z" level=info msg="TearDown network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" successfully" Sep 13 00:08:34.151307 containerd[1531]: time="2025-09-13T00:08:34.150435054Z" level=info msg="StopPodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" returns successfully" Sep 13 00:08:34.158972 containerd[1531]: time="2025-09-13T00:08:34.158816106Z" level=info msg="RemovePodSandbox for \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" Sep 13 00:08:34.158972 containerd[1531]: time="2025-09-13T00:08:34.158836510Z" level=info msg="Forcibly stopping sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\"" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.185 [WARNING][5813] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" 
WorkloadEndpoint="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.185 [INFO][5813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.185 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" iface="eth0" netns="" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.185 [INFO][5813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.185 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.203 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.204 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.204 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.208 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.208 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" HandleID="k8s-pod-network.697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Workload="localhost-k8s-whisker--b78ffb84d--4tkck-eth0" Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.210 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.214039 containerd[1531]: 2025-09-13 00:08:34.212 [INFO][5813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04" Sep 13 00:08:34.214662 containerd[1531]: time="2025-09-13T00:08:34.214229171Z" level=info msg="TearDown network for sandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" successfully" Sep 13 00:08:34.216628 containerd[1531]: time="2025-09-13T00:08:34.216614553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:34.216758 containerd[1531]: time="2025-09-13T00:08:34.216718888Z" level=info msg="RemovePodSandbox \"697640ece8917ad0d56e5bda6588816935756a5136a20be2e1dd67d3f437ae04\" returns successfully" Sep 13 00:08:34.218148 containerd[1531]: time="2025-09-13T00:08:34.218085167Z" level=info msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.242 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc744470-c98f-40e0-a342-3186715e3534", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e", Pod:"calico-apiserver-748877ff9b-h2slx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ef14a4cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.242 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.242 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" iface="eth0" netns="" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.242 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.242 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.254 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.254 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.254 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.258 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.258 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.259 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.261262 containerd[1531]: 2025-09-13 00:08:34.260 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.262296 containerd[1531]: time="2025-09-13T00:08:34.261345576Z" level=info msg="TearDown network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" successfully" Sep 13 00:08:34.262296 containerd[1531]: time="2025-09-13T00:08:34.261360839Z" level=info msg="StopPodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" returns successfully" Sep 13 00:08:34.262296 containerd[1531]: time="2025-09-13T00:08:34.261897343Z" level=info msg="RemovePodSandbox for \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" Sep 13 00:08:34.262296 containerd[1531]: time="2025-09-13T00:08:34.261932384Z" level=info msg="Forcibly stopping sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\"" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.281 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc744470-c98f-40e0-a342-3186715e3534", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c83d114911169e2a1eaa2497863e4116893dc69844add8d0241ccc6b70370e", Pod:"calico-apiserver-748877ff9b-h2slx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55ef14a4cb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.281 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.281 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" iface="eth0" netns="" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.281 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.281 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.297 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.297 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.297 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.300 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.300 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" HandleID="k8s-pod-network.597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Workload="localhost-k8s-calico--apiserver--748877ff9b--h2slx-eth0" Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.301 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.303684 containerd[1531]: 2025-09-13 00:08:34.302 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff" Sep 13 00:08:34.304279 containerd[1531]: time="2025-09-13T00:08:34.303663351Z" level=info msg="TearDown network for sandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" successfully" Sep 13 00:08:34.305947 containerd[1531]: time="2025-09-13T00:08:34.305928305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:34.306046 containerd[1531]: time="2025-09-13T00:08:34.306022607Z" level=info msg="RemovePodSandbox \"597a44a2433f9026351d2c0d0daddac4f0da1800f83f39391850561f25387cff\" returns successfully" Sep 13 00:08:34.306367 containerd[1531]: time="2025-09-13T00:08:34.306356664Z" level=info msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.325 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5", Pod:"coredns-668d6bf9bc-bvgjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ee488ee3a9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.326 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.326 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" iface="eth0" netns="" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.326 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.326 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.339 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.339 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.339 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.342 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.342 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.343 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.346192 containerd[1531]: 2025-09-13 00:08:34.344 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.346192 containerd[1531]: time="2025-09-13T00:08:34.346158611Z" level=info msg="TearDown network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" successfully" Sep 13 00:08:34.346192 containerd[1531]: time="2025-09-13T00:08:34.346173676Z" level=info msg="StopPodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" returns successfully" Sep 13 00:08:34.347043 containerd[1531]: time="2025-09-13T00:08:34.346843377Z" level=info msg="RemovePodSandbox for \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" Sep 13 00:08:34.347043 containerd[1531]: time="2025-09-13T00:08:34.346859621Z" level=info msg="Forcibly stopping sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\"" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.371 [WARNING][5897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c55cbdbc-12fb-409d-96af-7aa0cff6b9cc", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b64364b8fc28fe51fab3dbf6785cf9143ef9f211ee65f99fc52e020cc608af5", Pod:"coredns-668d6bf9bc-bvgjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ee488ee3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.371 [INFO][5897] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.371 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" iface="eth0" netns="" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.371 [INFO][5897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.371 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.389 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.389 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.389 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.399 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.399 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" HandleID="k8s-pod-network.c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Workload="localhost-k8s-coredns--668d6bf9bc--bvgjw-eth0" Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.401 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.407438 containerd[1531]: 2025-09-13 00:08:34.406 [INFO][5897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3" Sep 13 00:08:34.407438 containerd[1531]: time="2025-09-13T00:08:34.407407701Z" level=info msg="TearDown network for sandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" successfully" Sep 13 00:08:34.430098 containerd[1531]: time="2025-09-13T00:08:34.429967715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:34.430098 containerd[1531]: time="2025-09-13T00:08:34.430023800Z" level=info msg="RemovePodSandbox \"c93942dab392c6ee56a4fa9a02035538a28c4200b2932e5d9cd9886d811e47f3\" returns successfully" Sep 13 00:08:34.432683 containerd[1531]: time="2025-09-13T00:08:34.430392134Z" level=info msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.453 [WARNING][5919] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6ce864c-530a-4daf-972c-b9dd880246b5", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf", Pod:"calico-apiserver-748877ff9b-8wsdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c38515ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.453 [INFO][5919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.453 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" iface="eth0" netns="" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.453 [INFO][5919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.454 [INFO][5919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.467 [INFO][5926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.467 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.467 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.471 [WARNING][5926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.471 [INFO][5926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.472 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.474368 containerd[1531]: 2025-09-13 00:08:34.473 [INFO][5919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.474368 containerd[1531]: time="2025-09-13T00:08:34.474416975Z" level=info msg="TearDown network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" successfully" Sep 13 00:08:34.474368 containerd[1531]: time="2025-09-13T00:08:34.474431967Z" level=info msg="StopPodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" returns successfully" Sep 13 00:08:34.477615 containerd[1531]: time="2025-09-13T00:08:34.477598243Z" level=info msg="RemovePodSandbox for \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" Sep 13 00:08:34.478336 containerd[1531]: time="2025-09-13T00:08:34.477619517Z" level=info msg="Forcibly stopping sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\"" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.498 [WARNING][5940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0", GenerateName:"calico-apiserver-748877ff9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6ce864c-530a-4daf-972c-b9dd880246b5", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"748877ff9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58965c2fcbcf510ac6906fff13ab711505586dd74f0cd42a644481a3840828bf", Pod:"calico-apiserver-748877ff9b-8wsdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69c38515ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.498 [INFO][5940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.498 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" iface="eth0" netns="" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.498 [INFO][5940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.498 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.510 [INFO][5947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.510 [INFO][5947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.510 [INFO][5947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.514 [WARNING][5947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.514 [INFO][5947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" HandleID="k8s-pod-network.2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Workload="localhost-k8s-calico--apiserver--748877ff9b--8wsdp-eth0" Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.521 [INFO][5947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:08:34.523939 containerd[1531]: 2025-09-13 00:08:34.522 [INFO][5940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87" Sep 13 00:08:34.523939 containerd[1531]: time="2025-09-13T00:08:34.523765744Z" level=info msg="TearDown network for sandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" successfully" Sep 13 00:08:34.534698 containerd[1531]: time="2025-09-13T00:08:34.534605795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:08:34.534698 containerd[1531]: time="2025-09-13T00:08:34.534639917Z" level=info msg="RemovePodSandbox \"2690a7ec28df5d92992ed4d85756c3bdef017652f9c8b5cc6e7f02f07e238d87\" returns successfully" Sep 13 00:08:35.344537 kubelet[2723]: I0913 00:08:35.344506 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:44.117168 kubelet[2723]: I0913 00:08:44.117134 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:44.296623 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.89.65:47100.service - OpenSSH per-connection server daemon (139.178.89.65:47100). Sep 13 00:08:44.399516 sshd[6006]: Accepted publickey for core from 139.178.89.65 port 47100 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:08:44.402082 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:44.409212 systemd-logind[1512]: New session 10 of user core. Sep 13 00:08:44.414310 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:08:45.001328 sshd[6006]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:45.014453 systemd[1]: sshd@7-139.178.70.103:22-139.178.89.65:47100.service: Deactivated successfully. Sep 13 00:08:45.016785 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:08:45.017977 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:08:45.022145 systemd-logind[1512]: Removed session 10. Sep 13 00:08:50.013597 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.89.65:58614.service - OpenSSH per-connection server daemon (139.178.89.65:58614). 
Sep 13 00:08:50.165053 sshd[6063]: Accepted publickey for core from 139.178.89.65 port 58614 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:08:50.167438 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:50.172904 systemd-logind[1512]: New session 11 of user core. Sep 13 00:08:50.177519 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:08:51.144981 sshd[6063]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:51.163807 systemd[1]: sshd@8-139.178.70.103:22-139.178.89.65:58614.service: Deactivated successfully. Sep 13 00:08:51.165140 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:08:51.175926 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:08:51.176550 systemd-logind[1512]: Removed session 11. Sep 13 00:08:56.162634 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.89.65:58618.service - OpenSSH per-connection server daemon (139.178.89.65:58618). Sep 13 00:08:56.292411 sshd[6099]: Accepted publickey for core from 139.178.89.65 port 58618 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:08:56.293898 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:56.297221 systemd-logind[1512]: New session 12 of user core. Sep 13 00:08:56.307367 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:08:57.335118 sshd[6099]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.345419 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.89.65:58630.service - OpenSSH per-connection server daemon (139.178.89.65:58630). Sep 13 00:08:57.346349 systemd[1]: sshd@9-139.178.70.103:22-139.178.89.65:58618.service: Deactivated successfully. Sep 13 00:08:57.347690 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:08:57.349401 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit. 
Sep 13 00:08:57.350589 systemd-logind[1512]: Removed session 12. Sep 13 00:08:57.389548 sshd[6119]: Accepted publickey for core from 139.178.89.65 port 58630 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:08:57.390394 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:57.393208 systemd-logind[1512]: New session 13 of user core. Sep 13 00:08:57.398298 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:08:57.572423 sshd[6119]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.579554 systemd[1]: sshd@10-139.178.70.103:22-139.178.89.65:58630.service: Deactivated successfully. Sep 13 00:08:57.581356 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:08:57.582730 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:08:57.587503 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.89.65:58644.service - OpenSSH per-connection server daemon (139.178.89.65:58644). Sep 13 00:08:57.593649 systemd-logind[1512]: Removed session 13. Sep 13 00:08:57.633599 sshd[6132]: Accepted publickey for core from 139.178.89.65 port 58644 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:08:57.634495 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:57.637966 systemd-logind[1512]: New session 14 of user core. Sep 13 00:08:57.643312 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:08:57.916336 sshd[6132]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:57.918151 systemd[1]: sshd@11-139.178.70.103:22-139.178.89.65:58644.service: Deactivated successfully. Sep 13 00:08:57.919406 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:08:57.920433 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:08:57.921016 systemd-logind[1512]: Removed session 14. 
Sep 13 00:09:02.931470 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.89.65:60170.service - OpenSSH per-connection server daemon (139.178.89.65:60170). Sep 13 00:09:03.129812 sshd[6149]: Accepted publickey for core from 139.178.89.65 port 60170 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:03.131024 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:03.133605 systemd-logind[1512]: New session 15 of user core. Sep 13 00:09:03.139441 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:09:04.017532 sshd[6149]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:04.022506 systemd[1]: sshd@12-139.178.70.103:22-139.178.89.65:60170.service: Deactivated successfully. Sep 13 00:09:04.023843 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:09:04.024487 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:09:04.025086 systemd-logind[1512]: Removed session 15. Sep 13 00:09:09.035459 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.89.65:60174.service - OpenSSH per-connection server daemon (139.178.89.65:60174). Sep 13 00:09:09.844130 sshd[6164]: Accepted publickey for core from 139.178.89.65 port 60174 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:09.859329 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:09.881435 systemd-logind[1512]: New session 16 of user core. Sep 13 00:09:09.888369 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:09:14.125402 sshd[6164]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:14.132713 systemd[1]: sshd@13-139.178.70.103:22-139.178.89.65:60174.service: Deactivated successfully. Sep 13 00:09:14.134271 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:09:14.135346 systemd-logind[1512]: Session 16 logged out. 
Waiting for processes to exit. Sep 13 00:09:14.143428 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.89.65:45666.service - OpenSSH per-connection server daemon (139.178.89.65:45666). Sep 13 00:09:14.144916 systemd-logind[1512]: Removed session 16. Sep 13 00:09:14.215951 sshd[6178]: Accepted publickey for core from 139.178.89.65 port 45666 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:14.216827 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:14.219355 systemd-logind[1512]: New session 17 of user core. Sep 13 00:09:14.224332 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:09:15.483664 sshd[6178]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:15.484324 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.89.65:45678.service - OpenSSH per-connection server daemon (139.178.89.65:45678). Sep 13 00:09:15.487298 systemd[1]: sshd@14-139.178.70.103:22-139.178.89.65:45666.service: Deactivated successfully. Sep 13 00:09:15.488836 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:09:15.490134 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:09:15.490924 systemd-logind[1512]: Removed session 17. Sep 13 00:09:15.601043 sshd[6187]: Accepted publickey for core from 139.178.89.65 port 45678 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:15.602051 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:15.605662 systemd-logind[1512]: New session 18 of user core. Sep 13 00:09:15.612287 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:09:17.689994 sshd[6187]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:17.699625 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.89.65:45694.service - OpenSSH per-connection server daemon (139.178.89.65:45694). 
Sep 13 00:09:17.717486 systemd[1]: sshd@15-139.178.70.103:22-139.178.89.65:45678.service: Deactivated successfully. Sep 13 00:09:17.719164 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:09:17.723134 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:09:17.723839 systemd-logind[1512]: Removed session 18. Sep 13 00:09:17.954284 sshd[6258]: Accepted publickey for core from 139.178.89.65 port 45694 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:17.961771 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:17.965588 systemd-logind[1512]: New session 19 of user core. Sep 13 00:09:17.970854 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:09:18.963939 sshd[6258]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:18.971157 systemd[1]: sshd@16-139.178.70.103:22-139.178.89.65:45694.service: Deactivated successfully. Sep 13 00:09:18.973217 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:09:18.974269 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:09:18.986406 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.89.65:45710.service - OpenSSH per-connection server daemon (139.178.89.65:45710). Sep 13 00:09:18.987631 systemd-logind[1512]: Removed session 19. Sep 13 00:09:19.197878 sshd[6280]: Accepted publickey for core from 139.178.89.65 port 45710 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:19.212063 sshd[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:19.220817 systemd-logind[1512]: New session 20 of user core. Sep 13 00:09:19.225366 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 13 00:09:20.143111 sshd[6280]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:20.147029 systemd[1]: sshd@17-139.178.70.103:22-139.178.89.65:45710.service: Deactivated successfully. Sep 13 00:09:20.149029 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:09:20.151092 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:09:20.152533 systemd-logind[1512]: Removed session 20. Sep 13 00:09:25.160755 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.89.65:51206.service - OpenSSH per-connection server daemon (139.178.89.65:51206). Sep 13 00:09:25.345710 sshd[6316]: Accepted publickey for core from 139.178.89.65 port 51206 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:25.360840 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:25.368560 systemd-logind[1512]: New session 21 of user core. Sep 13 00:09:25.374284 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:09:25.670034 sshd[6316]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:25.674371 systemd[1]: sshd@18-139.178.70.103:22-139.178.89.65:51206.service: Deactivated successfully. Sep 13 00:09:25.677084 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:09:25.678992 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:09:25.680058 systemd-logind[1512]: Removed session 21. Sep 13 00:09:30.679041 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.89.65:39718.service - OpenSSH per-connection server daemon (139.178.89.65:39718). Sep 13 00:09:30.782061 sshd[6347]: Accepted publickey for core from 139.178.89.65 port 39718 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:30.783407 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:30.788985 systemd-logind[1512]: New session 22 of user core. 
Sep 13 00:09:30.794307 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:09:31.266327 sshd[6347]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:31.269010 systemd[1]: sshd@19-139.178.70.103:22-139.178.89.65:39718.service: Deactivated successfully. Sep 13 00:09:31.270592 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:09:31.271475 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:09:31.272097 systemd-logind[1512]: Removed session 22. Sep 13 00:09:36.287144 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.89.65:39732.service - OpenSSH per-connection server daemon (139.178.89.65:39732). Sep 13 00:09:36.509832 sshd[6385]: Accepted publickey for core from 139.178.89.65 port 39732 ssh2: RSA SHA256:3TZ0siYQ43CmuUVihwoq18hyOHBS5nlRY7FLM6+omOc Sep 13 00:09:36.513963 sshd[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:36.520690 systemd-logind[1512]: New session 23 of user core. Sep 13 00:09:36.533412 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:09:37.561396 sshd[6385]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:37.563538 systemd[1]: sshd@20-139.178.70.103:22-139.178.89.65:39732.service: Deactivated successfully. Sep 13 00:09:37.571794 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:09:37.573378 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:09:37.574369 systemd-logind[1512]: Removed session 23.