Sep 13 00:23:05.726152 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:15:39 -00 2025
Sep 13 00:23:05.726168 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:23:05.726174 kernel: Disabled fast string operations
Sep 13 00:23:05.726179 kernel: BIOS-provided physical RAM map:
Sep 13 00:23:05.726182 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 13 00:23:05.726188 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 13 00:23:05.726193 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 13 00:23:05.726197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 13 00:23:05.726222 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 13 00:23:05.726231 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 13 00:23:05.726236 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 13 00:23:05.726240 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 13 00:23:05.726244 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 13 00:23:05.726249 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 00:23:05.726255 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 13 00:23:05.726260 kernel: NX (Execute Disable) protection: active
Sep 13 00:23:05.726265 kernel: APIC: Static calls initialized
Sep 13 00:23:05.726270 kernel: SMBIOS 2.7 present.
Sep 13 00:23:05.726275 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 13 00:23:05.726279 kernel: DMI: Memory slots populated: 1/128
Sep 13 00:23:05.726285 kernel: vmware: hypercall mode: 0x00
Sep 13 00:23:05.726290 kernel: Hypervisor detected: VMware
Sep 13 00:23:05.726295 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 13 00:23:05.726299 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 13 00:23:05.726304 kernel: vmware: using clock offset of 5531207061 ns
Sep 13 00:23:05.726309 kernel: tsc: Detected 3408.000 MHz processor
Sep 13 00:23:05.726314 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:23:05.726319 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:23:05.726324 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 13 00:23:05.726329 kernel: total RAM covered: 3072M
Sep 13 00:23:05.726335 kernel: Found optimal setting for mtrr clean up
Sep 13 00:23:05.726342 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 13 00:23:05.726347 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Sep 13 00:23:05.726352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:23:05.726357 kernel: Using GB pages for direct mapping
Sep 13 00:23:05.726362 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:23:05.726367 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 13 00:23:05.726372 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL  440BX    06040000 VMW  01324272)
Sep 13 00:23:05.726377 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL  440BX    06040000 PTL  000F4240)
Sep 13 00:23:05.726383 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD  Custom   06040000 MSFT 03000001)
Sep 13 00:23:05.726389 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 00:23:05.726394 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 13 00:23:05.726399 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD  $SBFTBL$ 06040000 LTP  00000001)
Sep 13 00:23:05.726405 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD  ? APIC   06040000 LTP  00000000)
Sep 13 00:23:05.726411 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD  $PCITBL$ 06040000 LTP  00000001)
Sep 13 00:23:05.726416 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG  06040000 VMW  00000001)
Sep 13 00:23:05.726421 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW  00000001)
Sep 13 00:23:05.726429 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW  00000001)
Sep 13 00:23:05.726435 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 13 00:23:05.726440 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 13 00:23:05.726445 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 00:23:05.726450 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 13 00:23:05.726455 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 13 00:23:05.726465 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 13 00:23:05.726470 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 13 00:23:05.726475 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 13 00:23:05.726480 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 13 00:23:05.726485 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 13 00:23:05.726490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:23:05.726495 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:23:05.726536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 13 00:23:05.726542 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Sep 13 00:23:05.727098 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Sep 13 00:23:05.727105 kernel: Zone ranges:
Sep 13 00:23:05.727111 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:23:05.727116 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 13 00:23:05.727121 kernel: Normal empty
Sep 13 00:23:05.727126 kernel: Device empty
Sep 13 00:23:05.727131 kernel: Movable zone start for each node
Sep 13 00:23:05.727136 kernel: Early memory node ranges
Sep 13 00:23:05.727141 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 13 00:23:05.727146 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 13 00:23:05.727153 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 13 00:23:05.727158 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 13 00:23:05.727163 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:23:05.727168 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 13 00:23:05.727173 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 13 00:23:05.727179 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 13 00:23:05.727184 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 13 00:23:05.727189 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 00:23:05.727194 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 00:23:05.727200 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 00:23:05.727205 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 00:23:05.727210 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 00:23:05.727215 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 00:23:05.727220 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 00:23:05.727225 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 00:23:05.727230 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 00:23:05.727236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 00:23:05.727241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 00:23:05.727246 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 00:23:05.727252 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 00:23:05.727257 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 00:23:05.727262 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 00:23:05.727267 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 00:23:05.727272 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 13 00:23:05.727277 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 13 00:23:05.727282 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 13 00:23:05.727287 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 13 00:23:05.727292 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 13 00:23:05.727299 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 13 00:23:05.727304 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 13 00:23:05.727309 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 13 00:23:05.727314 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 13 00:23:05.727319 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 13 00:23:05.727324 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 13 00:23:05.727329 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 13 00:23:05.727334 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 13 00:23:05.727339 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 13 00:23:05.727345 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 13 00:23:05.727351 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 13 00:23:05.727356 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 13 00:23:05.727361 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 13 00:23:05.727366 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 13 00:23:05.727371 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 13 00:23:05.727376 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 13 00:23:05.727381 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 13 00:23:05.727386 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 13 00:23:05.727396 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 13 00:23:05.727401 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 13 00:23:05.727406 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 13 00:23:05.727413 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 13 00:23:05.727418 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 13 00:23:05.727423 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 13 00:23:05.727429 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 13 00:23:05.727434 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 13 00:23:05.727440 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 13 00:23:05.727446 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 13 00:23:05.727451 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 13 00:23:05.727457 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 13 00:23:05.727462 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 13 00:23:05.727467 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 13 00:23:05.727473 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 13 00:23:05.727479 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 13 00:23:05.727484 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 13 00:23:05.727489 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 13 00:23:05.727495 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 13 00:23:05.727501 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 13 00:23:05.727506 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 13 00:23:05.727512 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 13 00:23:05.727517 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 13 00:23:05.727523 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 13 00:23:05.727528 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 13 00:23:05.727533 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 13 00:23:05.727539 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 13 00:23:05.727563 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 13 00:23:05.727572 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 13 00:23:05.727577 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 13 00:23:05.727583 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 13 00:23:05.727588 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 13 00:23:05.727593 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 13 00:23:05.727599 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 13 00:23:05.727604 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 13 00:23:05.727610 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 13 00:23:05.727615 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 13 00:23:05.727620 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 13 00:23:05.727627 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 13 00:23:05.727632 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 13 00:23:05.727638 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 13 00:23:05.727643 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 13 00:23:05.727648 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 13 00:23:05.727654 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 13 00:23:05.727659 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 13 00:23:05.727664 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 13 00:23:05.727670 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 13 00:23:05.727675 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 13 00:23:05.727682 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 13 00:23:05.727687 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 13 00:23:05.727692 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 13 00:23:05.727698 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 13 00:23:05.727703 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 13 00:23:05.727708 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 13 00:23:05.727714 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 13 00:23:05.727719 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 13 00:23:05.727724 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 13 00:23:05.727731 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 13 00:23:05.727736 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 13 00:23:05.727742 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 13 00:23:05.727747 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 13 00:23:05.727752 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 13 00:23:05.727758 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 13 00:23:05.727763 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 13 00:23:05.727769 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 13 00:23:05.727774 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 13 00:23:05.727780 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 13 00:23:05.727786 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 13 00:23:05.727791 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 13 00:23:05.727797 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 13 00:23:05.727802 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 13 00:23:05.727807 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 13 00:23:05.727813 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 13 00:23:05.727818 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 13 00:23:05.727823 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 13 00:23:05.727829 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 13 00:23:05.727834 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 13 00:23:05.727841 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 13 00:23:05.727846 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 13 00:23:05.727852 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 13 00:23:05.727857 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 13 00:23:05.727863 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 13 00:23:05.727868 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 13 00:23:05.727873 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 13 00:23:05.727879 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 13 00:23:05.727884 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 13 00:23:05.727889 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 13 00:23:05.727897 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 13 00:23:05.727902 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:23:05.727907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 13 00:23:05.727913 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:23:05.727919 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 13 00:23:05.727924 kernel: TSC deadline timer available
Sep 13 00:23:05.727930 kernel: CPU topo: Max. logical packages: 128
Sep 13 00:23:05.727935 kernel: CPU topo: Max. logical dies: 128
Sep 13 00:23:05.727941 kernel: CPU topo: Max. dies per package: 1
Sep 13 00:23:05.727947 kernel: CPU topo: Max. threads per core: 1
Sep 13 00:23:05.727952 kernel: CPU topo: Num. cores per package: 1
Sep 13 00:23:05.727958 kernel: CPU topo: Num. threads per package: 1
Sep 13 00:23:05.727963 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Sep 13 00:23:05.727969 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 13 00:23:05.727974 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 13 00:23:05.727980 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:23:05.727986 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Sep 13 00:23:05.727991 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 13 00:23:05.727999 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 13 00:23:05.728004 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 13 00:23:05.728009 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 13 00:23:05.728015 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 13 00:23:05.728021 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 13 00:23:05.728026 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 13 00:23:05.728031 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 13 00:23:05.728037 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 13 00:23:05.728042 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 13 00:23:05.728049 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 13 00:23:05.728054 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 13 00:23:05.728059 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 13 00:23:05.728065 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 13 00:23:05.728070 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 13 00:23:05.728075 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 13 00:23:05.728081 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 13 00:23:05.728086 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 13 00:23:05.728092 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:23:05.728099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:23:05.728105 kernel: random: crng init done
Sep 13 00:23:05.728110 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 13 00:23:05.728116 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 13 00:23:05.728121 kernel: printk: log_buf_len min size: 262144 bytes
Sep 13 00:23:05.728127 kernel: printk: log_buf_len: 1048576 bytes
Sep 13 00:23:05.728132 kernel: printk: early log buf free: 245576(93%)
Sep 13 00:23:05.728138 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:23:05.728144 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:23:05.728150 kernel: Fallback order for Node 0: 0
Sep 13 00:23:05.728156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Sep 13 00:23:05.728161 kernel: Policy zone: DMA32
Sep 13 00:23:05.728167 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:23:05.728172 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 13 00:23:05.728178 kernel: ftrace: allocating 40122 entries in 157 pages
Sep 13 00:23:05.728183 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 00:23:05.728189 kernel: Dynamic Preempt: voluntary
Sep 13 00:23:05.728195 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:23:05.728201 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:23:05.728207 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 13 00:23:05.728215 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:23:05.728220 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:23:05.728226 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:23:05.728231 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:23:05.728237 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 13 00:23:05.728242 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:23:05.728248 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:23:05.728254 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 13 00:23:05.728260 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 13 00:23:05.728265 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Sep 13 00:23:05.728271 kernel: Console: colour VGA+ 80x25
Sep 13 00:23:05.728276 kernel: printk: legacy console [tty0] enabled
Sep 13 00:23:05.728282 kernel: printk: legacy console [ttyS0] enabled
Sep 13 00:23:05.728287 kernel: ACPI: Core revision 20240827
Sep 13 00:23:05.728293 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 13 00:23:05.728298 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:23:05.728305 kernel: x2apic enabled
Sep 13 00:23:05.728310 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:23:05.728316 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:23:05.728321 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 13 00:23:05.728327 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 13 00:23:05.728332 kernel: Disabled fast string operations
Sep 13 00:23:05.728338 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:23:05.728343 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:23:05.728349 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:23:05.728355 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 13 00:23:05.728361 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 00:23:05.728366 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 00:23:05.728372 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 00:23:05.728377 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:23:05.728383 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:23:05.728388 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:23:05.728394 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 00:23:05.728400 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 13 00:23:05.728406 kernel: active return thunk: its_return_thunk
Sep 13 00:23:05.728411 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:23:05.728416 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:23:05.728422 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:23:05.728427 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:23:05.728433 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:23:05.728438 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:23:05.728444 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:23:05.728450 kernel: pid_max: default: 131072 minimum: 1024
Sep 13 00:23:05.728456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 00:23:05.728461 kernel: landlock: Up and running.
Sep 13 00:23:05.728467 kernel: SELinux: Initializing.
Sep 13 00:23:05.728472 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:23:05.728478 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:23:05.728483 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 00:23:05.728489 kernel: Performance Events: Skylake events, core PMU driver.
Sep 13 00:23:05.728494 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 13 00:23:05.728501 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 13 00:23:05.728507 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 13 00:23:05.728512 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 13 00:23:05.728517 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 13 00:23:05.728522 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 13 00:23:05.728528 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 13 00:23:05.728533 kernel: ... version:                1
Sep 13 00:23:05.728539 kernel: ... bit width:              48
Sep 13 00:23:05.728580 kernel: ... generic registers:      4
Sep 13 00:23:05.728589 kernel: ... value mask:             0000ffffffffffff
Sep 13 00:23:05.728595 kernel: ... max period:             000000007fffffff
Sep 13 00:23:05.728600 kernel: ... fixed-purpose events:   0
Sep 13 00:23:05.728606 kernel: ... event mask:             000000000000000f
Sep 13 00:23:05.728611 kernel: signal: max sigframe size: 1776
Sep 13 00:23:05.728617 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:23:05.728622 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:23:05.728628 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Sep 13 00:23:05.728633 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:23:05.728640 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:23:05.728645 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:23:05.728650 kernel: .... node #0, CPUs: #1
Sep 13 00:23:05.728656 kernel: Disabled fast string operations
Sep 13 00:23:05.728661 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:23:05.728667 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 13 00:23:05.728673 kernel: Memory: 1926316K/2096628K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53828K init, 1088K bss, 158936K reserved, 0K cma-reserved)
Sep 13 00:23:05.728678 kernel: devtmpfs: initialized
Sep 13 00:23:05.728684 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:23:05.728690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 13 00:23:05.728696 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:23:05.728701 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 13 00:23:05.728706 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:23:05.728712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:23:05.728717 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:23:05.728723 kernel: audit: type=2000 audit(1757722982.286:1): state=initialized audit_enabled=0 res=1
Sep 13 00:23:05.728728 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:23:05.728734 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:23:05.728740 kernel: cpuidle: using governor menu
Sep 13 00:23:05.728746 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 13 00:23:05.728751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:23:05.728757 kernel: dca service started, version 1.12.1
Sep 13 00:23:05.728769 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Sep 13 00:23:05.728776 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:23:05.728782 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:23:05.728788 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:23:05.728793 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:23:05.728800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:23:05.728806 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:23:05.728811 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:23:05.728817 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:23:05.728823 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:23:05.728829 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:23:05.728835 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 13 00:23:05.728841 kernel: ACPI: Interpreter enabled
Sep 13 00:23:05.728847 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 13 00:23:05.728854 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:23:05.728860 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:23:05.728865 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:23:05.728871 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 13 00:23:05.728877 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 13 00:23:05.728960 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:23:05.729014 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 13 00:23:05.729065 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 13 00:23:05.729073 kernel: PCI host bridge to bus 0000:00
Sep 13 00:23:05.729124 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:05.729169 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 13 00:23:05.729218 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:23:05.729262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:23:05.729305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 13 00:23:05.729348 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 13 00:23:05.729409 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Sep 13 00:23:05.729467 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Sep 13 00:23:05.729519 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 00:23:05.729662 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 13 00:23:05.729722 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Sep 13 00:23:05.729774 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Sep 13 00:23:05.729824 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 13 00:23:05.729873 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 13 00:23:05.729922 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 13 00:23:05.729972 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 13 00:23:05.730029 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 13 00:23:05.730079 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 13 00:23:05.730128 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 13 00:23:05.730182 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Sep 13 00:23:05.730233 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Sep 13 00:23:05.730282 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Sep 13 00:23:05.730338 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Sep 13 00:23:05.730387 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Sep 13 00:23:05.730436 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Sep 13 00:23:05.730486 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Sep 13 00:23:05.730535 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Sep 13 00:23:05.730601 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:23:05.730657 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Sep 13 00:23:05.730709 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 13 00:23:05.730759 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 13 00:23:05.730808 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 13 00:23:05.730856 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 13 00:23:05.730909 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 13 00:23:05.730960 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 13 00:23:05.731056 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 13 00:23:05.731109 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 13 00:23:05.731160 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 13 00:23:05.731215 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 13 00:23:05.731266 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 13 00:23:05.731315 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 13 00:23:05.731364 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 13 00:23:05.731414 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 13 00:23:05.731466 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 13 00:23:05.731520 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 13 00:23:05.731610 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 13 00:23:05.731677 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 13 00:23:05.732615 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 13 00:23:05.732673 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 13 00:23:05.732728 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.732786 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.732850 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:23:05.732901 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 13 00:23:05.732952 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 13 00:23:05.733001 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.733055 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.733109 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:23:05.733160 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 13 00:23:05.733210 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 13 00:23:05.733259 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.733313 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.733364 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:23:05.733413 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 13 00:23:05.733466 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 13 00:23:05.733516 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.734341 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.734395 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:23:05.734446 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 13 00:23:05.734496 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 13 
00:23:05.734586 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.734652 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.734704 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:23:05.734755 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 13 00:23:05.734804 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 13 00:23:05.734854 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.734908 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.734965 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:23:05.735019 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 13 00:23:05.735069 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 13 00:23:05.735122 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.735743 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.735823 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:23:05.735909 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 13 00:23:05.735977 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 13 00:23:05.736038 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 13 00:23:05.736116 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.736211 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.736282 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:23:05.736342 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 13 00:23:05.736424 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 13 00:23:05.736506 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 13 00:23:05.736594 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Sep 13 00:23:05.736697 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.736780 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:23:05.736855 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 13 00:23:05.736907 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 13 00:23:05.736986 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.737070 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.737148 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:23:05.737201 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 13 00:23:05.737280 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 13 00:23:05.737359 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.737440 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.737495 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:23:05.737588 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 13 00:23:05.737669 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 13 00:23:05.737742 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.737805 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.737877 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:23:05.737959 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 13 00:23:05.738036 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 13 00:23:05.738122 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.738181 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.738263 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 
00:23:05.738339 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 13 00:23:05.738421 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 13 00:23:05.738474 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.738564 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.738646 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:23:05.738724 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 13 00:23:05.738778 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 13 00:23:05.738858 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 13 00:23:05.738936 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.739019 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.739073 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:23:05.739155 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 13 00:23:05.739236 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 13 00:23:05.739314 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 13 00:23:05.739366 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.739449 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.739529 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:23:05.739620 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 13 00:23:05.739680 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 13 00:23:05.739750 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 13 00:23:05.739828 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.739914 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.739970 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:23:05.740046 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 13 00:23:05.740128 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 13 00:23:05.740204 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.740265 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.740339 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:23:05.740415 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 13 00:23:05.740498 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 13 00:23:05.740568 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.740650 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.740732 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:23:05.740804 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 13 00:23:05.740857 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 13 00:23:05.740934 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.741010 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.741096 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:23:05.741150 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 13 00:23:05.741214 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 13 00:23:05.741280 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.741366 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.741442 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:23:05.741494 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 13 00:23:05.741726 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Sep 13 00:23:05.741787 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.741870 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.741941 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:23:05.742030 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 13 00:23:05.742085 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 13 00:23:05.742148 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 13 00:23:05.742226 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.742312 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.742383 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:23:05.742443 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 13 00:23:05.742515 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 13 00:23:05.742612 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 13 00:23:05.742683 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.742749 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.742820 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:23:05.742898 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 13 00:23:05.742979 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 13 00:23:05.743031 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.743619 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.743701 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:23:05.743775 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 13 00:23:05.743864 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 
13 00:23:05.743918 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.743999 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.744073 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 00:23:05.744160 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 13 00:23:05.744213 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 13 00:23:05.744289 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.744364 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.744454 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:23:05.744508 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 13 00:23:05.744594 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 13 00:23:05.744665 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.744757 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.744812 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:23:05.744888 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 13 00:23:05.744957 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 13 00:23:05.745044 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.745106 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 13 00:23:05.745183 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:23:05.745261 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 13 00:23:05.745349 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 13 00:23:05.745403 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.745484 kernel: pci_bus 0000:01: extended config space not accessible Sep 13 00:23:05.745562 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:23:05.745656 kernel: pci_bus 0000:02: extended config space not accessible Sep 13 00:23:05.745666 kernel: acpiphp: Slot [32] registered Sep 13 00:23:05.745672 kernel: acpiphp: Slot [33] registered Sep 13 00:23:05.745678 kernel: acpiphp: Slot [34] registered Sep 13 00:23:05.745684 kernel: acpiphp: Slot [35] registered Sep 13 00:23:05.745689 kernel: acpiphp: Slot [36] registered Sep 13 00:23:05.745697 kernel: acpiphp: Slot [37] registered Sep 13 00:23:05.745703 kernel: acpiphp: Slot [38] registered Sep 13 00:23:05.745709 kernel: acpiphp: Slot [39] registered Sep 13 00:23:05.745714 kernel: acpiphp: Slot [40] registered Sep 13 00:23:05.745720 kernel: acpiphp: Slot [41] registered Sep 13 00:23:05.745726 kernel: acpiphp: Slot [42] registered Sep 13 00:23:05.745732 kernel: acpiphp: Slot [43] registered Sep 13 00:23:05.745740 kernel: acpiphp: Slot [44] registered Sep 13 00:23:05.745750 kernel: acpiphp: Slot [45] registered Sep 13 00:23:05.745761 kernel: acpiphp: Slot [46] registered Sep 13 00:23:05.745780 kernel: acpiphp: Slot [47] registered Sep 13 00:23:05.745786 kernel: acpiphp: Slot [48] registered Sep 13 00:23:05.745792 kernel: acpiphp: Slot [49] registered Sep 13 00:23:05.745798 kernel: acpiphp: Slot [50] registered Sep 13 00:23:05.745803 kernel: acpiphp: Slot [51] registered Sep 13 00:23:05.745809 kernel: acpiphp: Slot [52] registered Sep 13 00:23:05.745815 kernel: acpiphp: Slot [53] registered Sep 13 00:23:05.745821 kernel: acpiphp: Slot [54] registered Sep 13 00:23:05.745827 kernel: acpiphp: Slot [55] registered Sep 13 00:23:05.745834 kernel: acpiphp: Slot [56] registered Sep 13 00:23:05.745840 kernel: acpiphp: Slot [57] registered Sep 13 00:23:05.745850 kernel: acpiphp: Slot [58] registered Sep 13 00:23:05.745860 kernel: acpiphp: Slot [59] registered Sep 13 00:23:05.745870 kernel: acpiphp: Slot [60] registered Sep 13 00:23:05.745881 kernel: acpiphp: Slot [61] registered Sep 13 00:23:05.745891 kernel: acpiphp: Slot 
[62] registered Sep 13 00:23:05.745901 kernel: acpiphp: Slot [63] registered Sep 13 00:23:05.745988 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 13 00:23:05.746041 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 13 00:23:05.746117 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 13 00:23:05.746177 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 13 00:23:05.746267 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 13 00:23:05.746337 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 13 00:23:05.746414 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 13 00:23:05.746471 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 13 00:23:05.746638 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 13 00:23:05.746708 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 13 00:23:05.746778 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:23:05.746848 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 00:23:05.746938 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 13 00:23:05.746994 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 13 00:23:05.747070 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 13 00:23:05.747128 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 13 00:23:05.747212 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 13 00:23:05.747289 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 13 00:23:05.747351 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 13 00:23:05.747417 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 13 00:23:05.747496 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 13 00:23:05.747595 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 13 00:23:05.747654 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 13 00:23:05.747730 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 13 00:23:05.747799 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 13 00:23:05.747888 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 13 00:23:05.747945 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 13 00:23:05.748020 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:23:05.748073 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 13 00:23:05.748136 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 13 00:23:05.748220 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 13 00:23:05.748289 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 13 00:23:05.748364 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 13 00:23:05.748418 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 13 00:23:05.748493 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 13 00:23:05.748588 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 13 00:23:05.748645 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 13 00:23:05.748718 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 13 00:23:05.748786 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 13 00:23:05.748865 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 13 00:23:05.748934 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 13 00:23:05.749010 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 13 00:23:05.749066 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 13 00:23:05.749144 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 13 00:23:05.749232 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 13 00:23:05.749312 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 13 00:23:05.749375 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 13 00:23:05.749455 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 13 00:23:05.749528 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 13 00:23:05.749891 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 13 00:23:05.749946 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 13 00:23:05.750001 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 13 00:23:05.750088 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 13 00:23:05.750106 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 13 00:23:05.750114 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 13 00:23:05.750120 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Sep 13 00:23:05.750126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:23:05.750137 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 13 00:23:05.750144 kernel: iommu: Default domain type: Translated Sep 13 00:23:05.750151 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:23:05.750166 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:23:05.750179 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:23:05.750193 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 13 00:23:05.750204 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 13 00:23:05.750303 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 13 00:23:05.750367 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 13 00:23:05.750455 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:23:05.750466 kernel: vgaarb: loaded Sep 13 00:23:05.750475 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 13 00:23:05.750484 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 13 00:23:05.750492 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:23:05.750498 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:23:05.750509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:23:05.750517 kernel: pnp: PnP ACPI init Sep 13 00:23:05.750626 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 13 00:23:05.750703 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 13 00:23:05.750774 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 13 00:23:05.750842 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 13 00:23:05.750896 kernel: pnp 00:06: [dma 2] Sep 13 00:23:05.751338 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 13 00:23:05.751410 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 13 
00:23:05.751462 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 13 00:23:05.751476 kernel: pnp: PnP ACPI: found 8 devices Sep 13 00:23:05.751487 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:23:05.751497 kernel: NET: Registered PF_INET protocol family Sep 13 00:23:05.751511 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:23:05.751521 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 00:23:05.751532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:23:05.751542 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:23:05.751563 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:23:05.751574 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:23:05.751591 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:23:05.751601 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:23:05.751609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:23:05.751614 kernel: NET: Registered PF_XDP protocol family Sep 13 00:23:05.751678 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 13 00:23:05.751747 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 00:23:05.751815 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:23:05.751896 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:23:05.751962 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:23:05.752035 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 13 00:23:05.752093 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 13 00:23:05.752173 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 13 00:23:05.752260 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 13 00:23:05.752334 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 13 00:23:05.752387 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 13 00:23:05.752463 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 13 00:23:05.753621 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 13 00:23:05.753688 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 13 00:23:05.753779 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 13 00:23:05.753870 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 13 00:23:05.753944 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 13 00:23:05.753999 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 13 00:23:05.754069 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 13 00:23:05.754150 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 13 00:23:05.754217 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 13 00:23:05.754288 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 13 00:23:05.754346 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 13 00:23:05.754425 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 13 00:23:05.754506 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 13 00:23:05.754590 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.754642 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.754718 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.754803 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.754873 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.754928 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.754993 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.755072 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.755145 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.755212 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.755271 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.755350 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.755429 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.755502 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.757194 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.757273 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.757362 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 13 00:23:05.757416 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 13 00:23:05.757468 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Sep 13 00:23:05.757519 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.757589 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.757642 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.757693 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.757755 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.757817 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.757868 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.757918 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.757970 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.758443 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.758525 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.758589 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.758642 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.758695 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.758766 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.758818 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.758886 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.758977 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759030 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759081 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759132 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759205 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759265 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759341 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759421 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759475 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759525 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759716 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759770 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759841 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.759919 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.759997 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760057 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760109 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760160 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760215 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760266 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760316 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760367 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760421 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760486 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760538 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760639 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760710 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760763 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760814 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760864 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.760915 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.760969 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761033 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761086 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761136 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761186 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761238 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761298 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761380 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761448 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761500 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761560 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761616 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761666 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761717 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761768 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761828 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Sep 13 00:23:05.761897 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Sep 13 00:23:05.761985 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 00:23:05.762052 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Sep 13 00:23:05.762119 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 13 00:23:05.762171 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 13 00:23:05.762238 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 13 00:23:05.762302 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned
Sep 13 00:23:05.762402 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 13 00:23:05.762457 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 13 00:23:05.762507 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 13 00:23:05.762570 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 13 00:23:05.762623 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 13 00:23:05.762674 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 13 00:23:05.762727 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 13 00:23:05.762777 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 13 00:23:05.762829 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 13 00:23:05.762880 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 13 00:23:05.762943 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Sep 13 00:23:05.763023 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 13 00:23:05.763094 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 13 00:23:05.763149 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Sep 13 00:23:05.763227 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 13 00:23:05.763304 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 13 00:23:05.763356 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Sep 13 00:23:05.763406 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 13 00:23:05.763457 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 13 00:23:05.763507 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Sep 13 00:23:05.763580 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 13 00:23:05.763646 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 13 00:23:05.763703 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Sep 13 00:23:05.763784 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 13 00:23:05.763854 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 13 00:23:05.763906 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Sep 13 00:23:05.763956 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 13 00:23:05.764011 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned
Sep 13 00:23:05.764062 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 13 00:23:05.764125 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Sep 13 00:23:05.764191 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Sep 13 00:23:05.764242 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 13 00:23:05.764320 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 13 00:23:05.764400 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Sep 13 00:23:05.764451 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Sep 13 00:23:05.764528 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 13 00:23:05.764695 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 13 00:23:05.764749 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Sep 13 00:23:05.764826 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Sep 13 00:23:05.764888 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 13 00:23:05.764970 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 13 00:23:05.765036 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Sep 13 00:23:05.765100 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 13 00:23:05.765166 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 13 00:23:05.765248 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Sep 13 00:23:05.765323 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 13 00:23:05.765386 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 13 00:23:05.765453 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Sep 13 00:23:05.765530 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 13 00:23:05.765618 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 13 00:23:05.765674 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 13 00:23:05.765745 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 13 00:23:05.765816 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 13 00:23:05.765908 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Sep 13 00:23:05.765961 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 13 00:23:05.766040 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 13 00:23:05.766092 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Sep 13 00:23:05.766152 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Sep 13 00:23:05.766237 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 13 00:23:05.766308 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 13 00:23:05.766360 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Sep 13 00:23:05.766409 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Sep 13 00:23:05.766477 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 13 00:23:05.766530 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 13 00:23:05.766621 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Sep 13 00:23:05.766709 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 13 00:23:05.766761 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 13 00:23:05.766811 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 13 00:23:05.766863 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Sep 13 00:23:05.766944 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 13 00:23:05.766996 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 13 00:23:05.767068 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Sep 13 00:23:05.767144 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 13 00:23:05.767212 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 13 00:23:05.767261 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Sep 13 00:23:05.767311 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 13 00:23:05.767378 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 13 00:23:05.767428 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 13 00:23:05.767487 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 13 00:23:05.767607 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 13 00:23:05.767660 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Sep 13 00:23:05.767711 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 13 00:23:05.767766 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 13 00:23:05.767838 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Sep 13 00:23:05.767890 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Sep 13 00:23:05.767940 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 13 00:23:05.767991 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 13 00:23:05.768062 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Sep 13 00:23:05.768149 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Sep 13 00:23:05.768201 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 13 00:23:05.768252 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 13 00:23:05.768324 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Sep 13 00:23:05.768379 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 13 00:23:05.768430 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 13 00:23:05.768495 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Sep 13 00:23:05.770590 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 13 00:23:05.770710 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 13 00:23:05.770772 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Sep 13 00:23:05.770845 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 13 00:23:05.770915 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 13 00:23:05.771004 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Sep 13 00:23:05.771059 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 13 00:23:05.771138 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 13 00:23:05.771200 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Sep 13 00:23:05.771319 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 13 00:23:05.771376 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 13 00:23:05.771451 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Sep 13 00:23:05.771508 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 13 00:23:05.771596 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:05.771664 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 13 00:23:05.771756 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:23:05.771803 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
Sep 13 00:23:05.771871 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
Sep 13 00:23:05.771953 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
Sep 13 00:23:05.772002 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Sep 13 00:23:05.772072 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 13 00:23:05.772122 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:05.772195 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
Sep 13 00:23:05.772276 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:23:05.772330 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
Sep 13 00:23:05.772389 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
Sep 13 00:23:05.774557 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
Sep 13 00:23:05.774641 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Sep 13 00:23:05.774697 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Sep 13 00:23:05.774774 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
Sep 13 00:23:05.774851 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Sep 13 00:23:05.774917 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 13 00:23:05.774973 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
Sep 13 00:23:05.775036 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Sep 13 00:23:05.775108 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 13 00:23:05.775172 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Sep 13 00:23:05.775237 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 13 00:23:05.775290 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Sep 13 00:23:05.775336 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 13 00:23:05.775386 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Sep 13 00:23:05.775433 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 13 00:23:05.775485 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Sep 13 00:23:05.775536 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 13 00:23:05.775629 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Sep 13 00:23:05.775699 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 13 00:23:05.775751 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
Sep 13 00:23:05.775818 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Sep 13 00:23:05.775865 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Sep 13 00:23:05.775918 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
Sep 13 00:23:05.775964 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Sep 13 00:23:05.776029 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 13 00:23:05.776106 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
Sep 13 00:23:05.776180 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Sep 13 00:23:05.776260 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 13 00:23:05.776311 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Sep 13 00:23:05.776382 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 13 00:23:05.776433 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Sep 13 00:23:05.776479 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 13 00:23:05.776529 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Sep 13 00:23:05.776584 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 13 00:23:05.776635 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Sep 13 00:23:05.776700 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 13 00:23:05.776779 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Sep 13 00:23:05.776840 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 13 00:23:05.776910 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
Sep 13 00:23:05.776958 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Sep 13 00:23:05.777017 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 13 00:23:05.777095 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
Sep 13 00:23:05.777158 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Sep 13 00:23:05.777226 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 13 00:23:05.777276 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
Sep 13 00:23:05.777339 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Sep 13 00:23:05.777415 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 13 00:23:05.777481 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Sep 13 00:23:05.777562 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 13 00:23:05.777623 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Sep 13 00:23:05.777696 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 13 00:23:05.777771 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Sep 13 00:23:05.777839 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 13 00:23:05.777893 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Sep 13 00:23:05.777963 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 13 00:23:05.778051 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Sep 13 00:23:05.778105 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 13 00:23:05.778168 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
Sep 13 00:23:05.778218 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Sep 13 00:23:05.778289 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 13 00:23:05.778370 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
Sep 13 00:23:05.778439 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Sep 13 00:23:05.778486 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 13 00:23:05.778825 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Sep 13 00:23:05.778908 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 13 00:23:05.778986 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Sep 13 00:23:05.779054 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 13 00:23:05.779110 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Sep 13 00:23:05.779158 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 13 00:23:05.779211 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Sep 13 00:23:05.779280 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 13 00:23:05.779357 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Sep 13 00:23:05.779418 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 13 00:23:05.779492 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Sep 13 00:23:05.779540 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 13 00:23:05.779647 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:23:05.779663 kernel: PCI: CLS 32 bytes, default 64
Sep 13 00:23:05.779674 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:23:05.779691 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 13 00:23:05.779700 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:23:05.779706 kernel: Initialise system trusted keyrings
Sep 13 00:23:05.779714 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:23:05.779721 kernel: Key type asymmetric registered
Sep 13 00:23:05.779726 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:23:05.779732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:23:05.779738 kernel: io scheduler mq-deadline registered
Sep 13 00:23:05.779744 kernel: io scheduler kyber registered
Sep 13 00:23:05.779752 kernel: io scheduler bfq registered
Sep 13 00:23:05.779823 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Sep 13 00:23:05.779879 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.779962 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Sep 13 00:23:05.780375 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.780452 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Sep 13 00:23:05.780509 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.780610 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Sep 13 00:23:05.780687 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.780760 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Sep 13 00:23:05.780817 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.780896 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Sep 13 00:23:05.780978 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781051 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Sep 13 00:23:05.781102 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781175 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Sep 13 00:23:05.781263 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781339 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Sep 13 00:23:05.781393 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781460 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Sep 13 00:23:05.781539 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781619 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Sep 13 00:23:05.781670 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781721 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Sep 13 00:23:05.781781 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781836 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Sep 13 00:23:05.781887 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.781940 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Sep 13 00:23:05.781991 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782062 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Sep 13 00:23:05.782113 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782167 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Sep 13 00:23:05.782243 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782329 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Sep 13 00:23:05.782381 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782433 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Sep 13 00:23:05.782483 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782562 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Sep 13 00:23:05.782617 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782699 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Sep 13 00:23:05.782781 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782834 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Sep 13 00:23:05.782884 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.782937 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Sep 13 00:23:05.783007 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783074 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Sep 13 00:23:05.783155 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783209 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Sep 13 00:23:05.783260 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783311 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Sep 13 00:23:05.783365 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783420 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Sep 13 00:23:05.783496 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783568 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Sep 13 00:23:05.783660 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783714 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Sep 13 00:23:05.783765 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783817 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Sep 13 00:23:05.783868 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.783919 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Sep 13 00:23:05.783998 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.784054 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Sep 13 00:23:05.784134 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.784211 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Sep 13 00:23:05.784264 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 13 00:23:05.784275 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:23:05.784282 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:23:05.784288 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:23:05.784296 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Sep 13 00:23:05.784302 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:23:05.784308 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:23:05.784363 kernel: rtc_cmos 00:01: registered as rtc0
Sep 13 00:23:05.784438 kernel: rtc_cmos 00:01: setting system clock to 2025-09-13T00:23:05 UTC (1757722985)
Sep 13 00:23:05.784448 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:23:05.784493 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Sep 13 00:23:05.784506 kernel: intel_pstate: CPU model not supported
Sep 13 00:23:05.784520 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:23:05.784531 kernel: Segment Routing with IPv6
Sep 13 00:23:05.784541 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:23:05.784565 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:23:05.784576 kernel: Key type dns_resolver registered
Sep 13 00:23:05.784587 kernel: IPI shorthand broadcast: enabled
Sep 13 00:23:05.784599 kernel: sched_clock: Marking stable (2680322119, 178429410)->(2872190599, -13439070)
Sep 13 00:23:05.784618 kernel: registered taskstats version 1
Sep 13 00:23:05.784626 kernel: Loading compiled-in X.509 certificates
Sep 13 00:23:05.784632 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: dd6b45f5ed9ac8d42d60bdb17f83ef06c8bcd8f6'
Sep 13 00:23:05.784641 kernel: Demotion targets for Node 0: null
Sep 13 00:23:05.784647 kernel: Key type .fscrypt registered
Sep 13 00:23:05.784653 kernel: Key type fscrypt-provisioning registered
Sep 13 00:23:05.784659 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:23:05.784665 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:23:05.784671 kernel: ima: No architecture policies found
Sep 13 00:23:05.784678 kernel: clk: Disabling unused clocks
Sep 13 00:23:05.784684 kernel: Warning: unable to open an initial console.
Sep 13 00:23:05.784692 kernel: Freeing unused kernel image (initmem) memory: 53828K
Sep 13 00:23:05.784698 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 00:23:05.784704 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 13 00:23:05.784710 kernel: Run /init as init process
Sep 13 00:23:05.784717 kernel: with arguments:
Sep 13 00:23:05.784723 kernel: /init
Sep 13 00:23:05.784729 kernel: with environment:
Sep 13 00:23:05.784735 kernel: HOME=/
Sep 13 00:23:05.784741 kernel: TERM=linux
Sep 13 00:23:05.784747 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:23:05.784756 systemd[1]: Successfully made /usr/ read-only.
Sep 13 00:23:05.784768 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:23:05.784780 systemd[1]: Detected virtualization vmware.
Sep 13 00:23:05.784792 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:05.784806 systemd[1]: Running in initrd.
Sep 13 00:23:05.784814 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:23:05.784821 systemd[1]: Hostname set to .
Sep 13 00:23:05.784830 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:23:05.784836 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:23:05.784843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:05.784850 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:05.784862 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:23:05.784874 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:05.784885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:23:05.784899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:23:05.784912 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:23:05.784923 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:23:05.784935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:05.784947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:05.784958 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:23:05.784972 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:05.784980 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:05.784988 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:23:05.784994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:05.785001 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:05.785007 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:23:05.785014 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 00:23:05.785020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:05.785026 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:05.785033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:05.785041 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:23:05.785047 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:23:05.785054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:23:05.785061 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:23:05.785073 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 00:23:05.785085 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:23:05.785103 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:23:05.785112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:23:05.785119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:05.785127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:23:05.785134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:05.785140 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:23:05.785147 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:23:05.785176 systemd-journald[244]: Collecting audit messages is disabled.
Sep 13 00:23:05.785202 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:05.785218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:23:05.785230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:05.785244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:23:05.785256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:23:05.785270 kernel: Bridge firewalling registered
Sep 13 00:23:05.785277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:05.785284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:05.785291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:23:05.785297 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:05.785304 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:23:05.785312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:05.785320 systemd-journald[244]: Journal started
Sep 13 00:23:05.785334 systemd-journald[244]: Runtime Journal (/run/log/journal/974e559a90314251ab2373213e6c001e) is 4.8M, max 38.9M, 34M free.
Sep 13 00:23:05.732862 systemd-modules-load[246]: Inserted module 'overlay'
Sep 13 00:23:05.763044 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 13 00:23:05.788853 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:23:05.791800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:23:05.795060 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:23:05.800876 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 00:23:05.802688 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:05.803774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:05.834666 systemd-resolved[313]: Positive Trust Anchors:
Sep 13 00:23:05.834903 systemd-resolved[313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:23:05.835072 systemd-resolved[313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:23:05.837365 systemd-resolved[313]: Defaulting to hostname 'linux'.
Sep 13 00:23:05.838104 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:05.838238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:05.853572 kernel: SCSI subsystem initialized
Sep 13 00:23:05.873563 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:23:05.883563 kernel: iscsi: registered transport (tcp)
Sep 13 00:23:05.906569 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:23:05.906599 kernel: QLogic iSCSI HBA Driver
Sep 13 00:23:05.916894 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:23:05.927210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:05.928442 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:23:05.951025 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:23:05.951824 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:23:05.994562 kernel: raid6: avx2x4 gen() 47236 MB/s
Sep 13 00:23:06.011559 kernel: raid6: avx2x2 gen() 53031 MB/s
Sep 13 00:23:06.028819 kernel: raid6: avx2x1 gen() 44655 MB/s
Sep 13 00:23:06.028834 kernel: raid6: using algorithm avx2x2 gen() 53031 MB/s
Sep 13 00:23:06.046735 kernel: raid6: .... xor() 31933 MB/s, rmw enabled
Sep 13 00:23:06.046750 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:23:06.060558 kernel: xor: automatically using best checksumming function avx
Sep 13 00:23:06.166564 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:23:06.170400 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:23:06.171493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:06.185769 systemd-udevd[493]: Using default interface naming scheme 'v255'.
Sep 13 00:23:06.189148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:06.190417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:23:06.205701 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Sep 13 00:23:06.219555 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:23:06.220444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:23:06.298778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:06.300844 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:23:06.359561 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Sep 13 00:23:06.360559 kernel: vmw_pvscsi: using 64bit dma
Sep 13 00:23:06.361884 kernel: vmw_pvscsi: max_id: 16
Sep 13 00:23:06.361906 kernel: vmw_pvscsi: setting ring_pages to 8
Sep 13 00:23:06.363898 kernel: vmw_pvscsi: enabling reqCallThreshold
Sep 13 00:23:06.364001 kernel: vmw_pvscsi: driver-based request coalescing enabled
Sep 13 00:23:06.364015 kernel: vmw_pvscsi: using MSI-X
Sep 13 00:23:06.365578 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Sep 13 00:23:06.367559 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Sep 13 00:23:06.367663 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Sep 13 00:23:06.382557 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Sep 13 00:23:06.382586 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Sep 13 00:23:06.385797 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Sep 13 00:23:06.396584 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:23:06.402351 (udev-worker)[545]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 13 00:23:06.405558 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Sep 13 00:23:06.406913 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:06.406995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:06.407739 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:06.409090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:06.420557 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Sep 13 00:23:06.423555 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Sep 13 00:23:06.423647 kernel: libata version 3.00 loaded.
Sep 13 00:23:06.423660 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:23:06.424567 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Sep 13 00:23:06.426587 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Sep 13 00:23:06.426666 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Sep 13 00:23:06.430553 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:23:06.441455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:06.442567 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:23:06.443554 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:23:06.448809 kernel: ata_piix 0000:00:07.1: version 2.13
Sep 13 00:23:06.448901 kernel: scsi host1: ata_piix
Sep 13 00:23:06.451848 kernel: scsi host2: ata_piix
Sep 13 00:23:06.451944 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Sep 13 00:23:06.451954 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Sep 13 00:23:06.483491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Sep 13 00:23:06.488823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 13 00:23:06.494001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Sep 13 00:23:06.498234 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Sep 13 00:23:06.498375 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Sep 13 00:23:06.499033 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:23:06.539559 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:23:06.624822 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Sep 13 00:23:06.628615 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Sep 13 00:23:06.656565 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Sep 13 00:23:06.656677 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:23:06.667560 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:23:06.992850 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:23:06.993229 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:23:06.993366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:06.993582 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:23:06.994233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:23:07.004804 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:23:07.558562 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:23:07.558870 disk-uuid[642]: The operation has completed successfully.
Sep 13 00:23:07.601285 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:23:07.601347 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:23:07.611455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:23:07.621615 sh[673]: Success
Sep 13 00:23:07.636016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:23:07.636047 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:23:07.637211 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 00:23:07.645872 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 13 00:23:07.681642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:23:07.684593 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:23:07.692667 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:23:07.705581 kernel: BTRFS: device fsid ca815b72-c68a-4b5e-8622-cfb6842bab47 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (685)
Sep 13 00:23:07.705608 kernel: BTRFS info (device dm-0): first mount of filesystem ca815b72-c68a-4b5e-8622-cfb6842bab47
Sep 13 00:23:07.707557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:07.714608 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:23:07.714628 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:23:07.716010 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 00:23:07.718067 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:23:07.718409 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:23:07.719393 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Sep 13 00:23:07.720618 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:23:07.749561 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (708)
Sep 13 00:23:07.752246 kernel: BTRFS info (device sda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:23:07.752287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:07.767164 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:23:07.767185 kernel: BTRFS info (device sda6): enabling free space tree
Sep 13 00:23:07.771581 kernel: BTRFS info (device sda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:23:07.772413 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:23:07.773506 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:23:07.801342 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 13 00:23:07.804632 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:23:07.880429 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:23:07.881613 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:23:07.890679 ignition[728]: Ignition 2.21.0
Sep 13 00:23:07.890686 ignition[728]: Stage: fetch-offline
Sep 13 00:23:07.890706 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:07.890711 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:07.890764 ignition[728]: parsed url from cmdline: ""
Sep 13 00:23:07.890766 ignition[728]: no config URL provided
Sep 13 00:23:07.890769 ignition[728]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:23:07.890774 ignition[728]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:23:07.891133 ignition[728]: config successfully fetched
Sep 13 00:23:07.891151 ignition[728]: parsing config with SHA512: 54b661f7636b61cb8b8550326744545dc4156b5d20c30c1955ccbf2cea90905a0f148479315a152eeec358ef6f2a1ce1c4e0ffd2b8667881ba06ebefedcafdce
Sep 13 00:23:07.898244 unknown[728]: fetched base config from "system"
Sep 13 00:23:07.899780 ignition[728]: fetch-offline: fetch-offline passed
Sep 13 00:23:07.899569 unknown[728]: fetched user config from "vmware"
Sep 13 00:23:07.899815 ignition[728]: Ignition finished successfully
Sep 13 00:23:07.902706 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:23:07.910876 systemd-networkd[870]: lo: Link UP
Sep 13 00:23:07.911145 systemd-networkd[870]: lo: Gained carrier
Sep 13 00:23:07.912024 systemd-networkd[870]: Enumeration completed
Sep 13 00:23:07.912187 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:23:07.912343 systemd[1]: Reached target network.target - Network.
Sep 13 00:23:07.912436 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:23:07.913077 systemd-networkd[870]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Sep 13 00:23:07.914265 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:23:07.917211 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 13 00:23:07.917367 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 13 00:23:07.917771 systemd-networkd[870]: ens192: Link UP
Sep 13 00:23:07.917946 systemd-networkd[870]: ens192: Gained carrier
Sep 13 00:23:07.928526 ignition[874]: Ignition 2.21.0
Sep 13 00:23:07.928537 ignition[874]: Stage: kargs
Sep 13 00:23:07.928667 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:07.928673 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:07.929654 ignition[874]: kargs: kargs passed
Sep 13 00:23:07.929854 ignition[874]: Ignition finished successfully
Sep 13 00:23:07.931100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:23:07.931910 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:23:07.947050 ignition[881]: Ignition 2.21.0
Sep 13 00:23:07.947278 ignition[881]: Stage: disks
Sep 13 00:23:07.947441 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:07.947564 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:07.948183 ignition[881]: disks: disks passed
Sep 13 00:23:07.948341 ignition[881]: Ignition finished successfully
Sep 13 00:23:07.949037 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:23:07.949394 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:23:07.949527 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:23:07.949719 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:23:07.949903 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:23:07.950074 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:23:07.950766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:23:07.971756 systemd-fsck[890]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 13 00:23:07.972860 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:23:07.973844 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:23:08.050561 kernel: EXT4-fs (sda9): mounted filesystem 7f859ed0-e8c8-40c1-91d3-e1e964d8c4e8 r/w with ordered data mode. Quota mode: none.
Sep 13 00:23:08.051151 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:23:08.051680 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:23:08.052737 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:23:08.053585 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:23:08.053959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:23:08.055594 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:23:08.055612 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:23:08.062667 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:23:08.063407 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:23:08.071572 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (898)
Sep 13 00:23:08.074852 kernel: BTRFS info (device sda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:23:08.074874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:08.078936 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:23:08.078954 kernel: BTRFS info (device sda6): enabling free space tree
Sep 13 00:23:08.080370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:23:08.092935 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:23:08.095449 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:23:08.098057 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:23:08.100457 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:23:08.158050 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:23:08.159087 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:23:08.160630 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:23:08.169605 kernel: BTRFS info (device sda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:23:08.185147 ignition[1011]: INFO : Ignition 2.21.0
Sep 13 00:23:08.185534 ignition[1011]: INFO : Stage: mount
Sep 13 00:23:08.185534 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:08.185534 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:08.187690 ignition[1011]: INFO : mount: mount passed
Sep 13 00:23:08.188619 ignition[1011]: INFO : Ignition finished successfully
Sep 13 00:23:08.189016 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:23:08.190176 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:23:08.190917 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:23:08.718601 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:23:08.719565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:23:08.736582 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1024)
Sep 13 00:23:08.736617 kernel: BTRFS info (device sda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e
Sep 13 00:23:08.738563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:08.742090 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:23:08.742115 kernel: BTRFS info (device sda6): enabling free space tree
Sep 13 00:23:08.743199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:23:08.757561 ignition[1041]: INFO : Ignition 2.21.0
Sep 13 00:23:08.757561 ignition[1041]: INFO : Stage: files
Sep 13 00:23:08.757561 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:08.757561 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:08.757561 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:23:08.769720 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:23:08.769720 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:23:08.791280 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:23:08.791577 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:23:08.791882 unknown[1041]: wrote ssh authorized keys file for user: core
Sep 13 00:23:08.792166 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:23:08.794879 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:23:08.795140 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:23:08.829515 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:23:08.968634 systemd-networkd[870]: ens192: Gained IPv6LL
Sep 13 00:23:09.017561 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:23:09.017561 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:23:09.018041 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:23:09.020419 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:23:09.020621 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:23:09.020621 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:09.023211 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:09.023449 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:09.023449 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:23:09.514934 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:23:09.896390 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:09.896390 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:23:09.897487 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 13 00:23:09.897779 ignition[1041]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 00:23:09.897779 ignition[1041]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:23:09.898185 ignition[1041]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 00:23:09.898530 ignition[1041]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:23:09.922201 ignition[1041]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:23:09.924399 ignition[1041]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:23:09.924620 ignition[1041]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:23:09.924620 ignition[1041]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:23:09.924620 ignition[1041]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:23:09.925719 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:23:09.925719 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:23:09.925719 ignition[1041]: INFO : files: files passed
Sep 13 00:23:09.925719 ignition[1041]: INFO : Ignition finished successfully
Sep 13 00:23:09.925419 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:23:09.927617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:23:09.928161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:23:09.935242 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:23:09.935593 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:23:09.938270 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:09.938270 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:09.939331 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:09.940434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:23:09.940676 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:23:09.941236 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:23:09.971198 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:23:09.971291 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:23:09.971690 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:09.971887 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:23:09.972112 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:23:09.972812 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:23:09.984974 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:09.986615 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:23:09.998812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:09.999235 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:09.999732 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:23:10.000086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:23:10.000213 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:10.000888 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:23:10.001249 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:23:10.001609 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:23:10.001946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:23:10.002364 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:23:10.002728 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:23:10.002956 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:23:10.003170 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:23:10.003435 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:23:10.003670 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:23:10.003876 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:23:10.004042 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:23:10.004149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:23:10.004542 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:10.004777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:10.004968 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:23:10.005044 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:10.005239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:23:10.005340 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:23:10.005764 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:23:10.005870 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:23:10.006234 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:23:10.006401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:23:10.009591 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:10.009851 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:23:10.010078 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:23:10.010277 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:23:10.010361 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:10.010632 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:23:10.010712 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:10.011008 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:23:10.011127 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:23:10.011381 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:23:10.011477 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:23:10.012422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:23:10.012578 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:23:10.012683 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:10.013420 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:23:10.014602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:23:10.014715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:10.014968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:23:10.015068 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:23:10.018495 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:23:10.018749 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:23:10.030120 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:23:10.034551 ignition[1096]: INFO : Ignition 2.21.0
Sep 13 00:23:10.034551 ignition[1096]: INFO : Stage: umount
Sep 13 00:23:10.034551 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:10.034551 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 13 00:23:10.035162 ignition[1096]: INFO : umount: umount passed
Sep 13 00:23:10.035162 ignition[1096]: INFO : Ignition finished successfully
Sep 13 00:23:10.036046 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:23:10.036105 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:23:10.036916 systemd[1]: Stopped target network.target - Network.
Sep 13 00:23:10.037127 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:23:10.037160 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:23:10.037523 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:23:10.037557 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:23:10.037800 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:23:10.037822 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:23:10.038291 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:23:10.038315 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:23:10.038767 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:23:10.039217 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:10.043966 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:23:10.044219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:10.045827 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 00:23:10.046182 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:23:10.046272 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:10.047791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:23:10.047968 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:23:10.048033 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:23:10.049247 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 00:23:10.050028 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 00:23:10.050498 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:23:10.050523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:10.051353 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:23:10.051460 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:23:10.051490 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:23:10.051668 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Sep 13 00:23:10.051692 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 13 00:23:10.051860 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:23:10.051883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:10.054642 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:23:10.054683 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:10.054828 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:10.055540 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:23:10.069019 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:23:10.070673 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:10.071007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:23:10.071032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:10.071159 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:23:10.071176 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:10.071285 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:23:10.071311 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:23:10.071473 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:23:10.071498 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:23:10.071657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:23:10.071680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:10.073647 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:23:10.073771 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 00:23:10.073808 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:10.074756 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:23:10.074787 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:10.074980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:10.075004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:10.075724 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:23:10.075785 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:23:10.083281 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:23:10.083386 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:23:10.095524 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:23:10.095628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:23:10.096115 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:23:10.096281 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:23:10.096327 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:23:10.097083 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:23:10.111787 systemd[1]: Switching root.
Sep 13 00:23:10.142796 systemd-journald[244]: Journal stopped
Sep 13 00:23:11.343574 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:23:11.343601 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:23:11.343609 kernel: SELinux: policy capability open_perms=1
Sep 13 00:23:11.343615 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:23:11.343621 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:23:11.343627 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:23:11.343634 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:23:11.343639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:23:11.343645 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:23:11.343650 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 00:23:11.343657 systemd[1]: Successfully loaded SELinux policy in 39.103ms.
Sep 13 00:23:11.343664 kernel: audit: type=1403 audit(1757722990.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:23:11.343671 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.345ms.
Sep 13 00:23:11.343679 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:23:11.343686 systemd[1]: Detected virtualization vmware.
Sep 13 00:23:11.343693 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:11.343700 systemd[1]: Detected first boot.
Sep 13 00:23:11.343707 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:23:11.343715 zram_generator::config[1140]: No configuration found.
Sep 13 00:23:11.343800 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Sep 13 00:23:11.343811 kernel: Guest personality initialized and is active
Sep 13 00:23:11.343817 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 00:23:11.343823 kernel: Initialized host personality
Sep 13 00:23:11.343831 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 00:23:11.343838 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:23:11.343845 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:23:11.343853 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Sep 13 00:23:11.343859 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 00:23:11.343866 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:23:11.343872 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:23:11.343880 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:23:11.343888 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:23:11.343895 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:23:11.343901 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:23:11.343908 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:23:11.343915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:23:11.343922 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:23:11.343931 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:23:11.343938 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:23:11.343944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:11.343953 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:11.343960 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:23:11.343967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:23:11.343974 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:23:11.343981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:11.343989 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:23:11.343996 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:11.344003 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:11.344010 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:23:11.344017 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:11.344023 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:23:11.344030 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:23:11.344037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:11.344045 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:23:11.344052 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:11.344059 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:11.344065 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:23:11.344073 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:23:11.344081 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 00:23:11.344088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:11.344095 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:11.344102 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:11.344109 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:23:11.344116 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:23:11.344123 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:23:11.344130 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:23:11.344138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:11.344145 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:23:11.344152 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:23:11.344159 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:23:11.344167 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:23:11.344174 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:23:11.344181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:23:11.344187 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Sep 13 00:23:11.344196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:23:11.344203 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:23:11.344210 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:11.344217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:23:11.344224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:11.344231 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:23:11.344238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:11.344245 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:23:11.344253 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:23:11.344260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:23:11.344267 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:23:11.344274 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:23:11.344281 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:23:11.344288 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:23:11.344295 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:23:11.344302 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:23:11.344310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:23:11.344317 kernel: fuse: init (API version 7.41)
Sep 13 00:23:11.344324 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 00:23:11.344331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:23:11.344338 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:23:11.344345 systemd[1]: Stopped verity-setup.service.
Sep 13 00:23:11.344352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:11.344359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:23:11.344366 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:23:11.344374 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:23:11.344382 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:23:11.344389 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:23:11.344396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:23:11.344403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:11.344410 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:23:11.344417 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:23:11.344424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:11.344432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:11.344439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:11.344446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:11.344453 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:23:11.344459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:23:11.344467 kernel: loop: module loaded
Sep 13 00:23:11.344473 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:11.344480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:23:11.344487 kernel: ACPI: bus type drm_connector registered
Sep 13 00:23:11.344495 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:23:11.344503 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:23:11.344512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:23:11.344521 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:23:11.344528 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 00:23:11.344536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:23:11.344938 systemd-journald[1230]: Collecting audit messages is disabled.
Sep 13 00:23:11.344963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:11.344972 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:23:11.344979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:23:11.344988 systemd-journald[1230]: Journal started
Sep 13 00:23:11.345003 systemd-journald[1230]: Runtime Journal (/run/log/journal/6808c5eab0954c0fba38834d8d65224c) is 4.8M, max 38.9M, 34M free.
Sep 13 00:23:11.141192 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:23:11.147489 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:23:11.147721 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:23:11.348708 jq[1210]: true
Sep 13 00:23:11.350018 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:23:11.350527 jq[1241]: true
Sep 13 00:23:11.358262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:23:11.360594 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:23:11.363511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:23:11.363576 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:23:11.368643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:23:11.368940 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:11.369047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:11.369346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:11.369634 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 00:23:11.370199 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:23:11.370539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:23:11.371573 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:23:11.384515 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:23:11.385750 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:23:11.392679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:23:11.395659 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 00:23:11.395813 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:11.397647 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:23:11.404045 kernel: loop0: detected capacity change from 0 to 2960
Sep 13 00:23:11.401660 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:23:11.423695 systemd-journald[1230]: Time spent on flushing to /var/log/journal/6808c5eab0954c0fba38834d8d65224c is 54.316ms for 1763 entries.
Sep 13 00:23:11.423695 systemd-journald[1230]: System Journal (/var/log/journal/6808c5eab0954c0fba38834d8d65224c) is 8M, max 584.8M, 576.8M free.
Sep 13 00:23:11.489841 systemd-journald[1230]: Received client request to flush runtime journal.
Sep 13 00:23:11.489866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:23:11.489877 kernel: loop1: detected capacity change from 0 to 113872
Sep 13 00:23:11.439521 ignition[1264]: Ignition 2.21.0
Sep 13 00:23:11.424369 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:11.440708 ignition[1264]: deleting config from guestinfo properties
Sep 13 00:23:11.452724 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 00:23:11.455540 ignition[1264]: Successfully deleted config
Sep 13 00:23:11.457856 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Sep 13 00:23:11.491334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:23:11.495524 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:23:11.501180 kernel: loop2: detected capacity change from 0 to 224512
Sep 13 00:23:11.500974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:23:11.531123 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Sep 13 00:23:11.531134 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Sep 13 00:23:11.536991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:11.544019 kernel: loop3: detected capacity change from 0 to 146240
Sep 13 00:23:11.589591 kernel: loop4: detected capacity change from 0 to 2960
Sep 13 00:23:11.595739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:11.667567 kernel: loop5: detected capacity change from 0 to 113872
Sep 13 00:23:11.694610 kernel: loop6: detected capacity change from 0 to 224512
Sep 13 00:23:11.720585 kernel: loop7: detected capacity change from 0 to 146240
Sep 13 00:23:11.746677 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Sep 13 00:23:11.747457 (sd-merge)[1313]: Merged extensions into '/usr'.
Sep 13 00:23:11.750638 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:23:11.750651 systemd[1]: Reloading...
Sep 13 00:23:11.840573 zram_generator::config[1340]: No configuration found.
Sep 13 00:23:11.904462 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:23:11.943170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:11.952424 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:23:11.998415 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:23:11.998559 systemd[1]: Reloading finished in 247 ms.
Sep 13 00:23:12.015972 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:23:12.016342 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:23:12.016656 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:23:12.021702 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:23:12.024633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:23:12.025834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:12.032622 systemd[1]: Reload requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:23:12.032634 systemd[1]: Reloading...
Sep 13 00:23:12.038808 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 00:23:12.039242 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 00:23:12.039506 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:23:12.039770 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:23:12.040469 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:23:12.040746 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Sep 13 00:23:12.040816 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Sep 13 00:23:12.043041 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:23:12.043106 systemd-tmpfiles[1398]: Skipping /boot
Sep 13 00:23:12.050235 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:23:12.050321 systemd-tmpfiles[1398]: Skipping /boot
Sep 13 00:23:12.071107 systemd-udevd[1399]: Using default interface naming scheme 'v255'.
Sep 13 00:23:12.084575 zram_generator::config[1431]: No configuration found.
Sep 13 00:23:12.179155 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:12.188633 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:23:12.246048 systemd[1]: Reloading finished in 213 ms.
Sep 13 00:23:12.251841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:12.252347 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:12.263645 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:23:12.267614 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:23:12.269310 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 00:23:12.273455 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:23:12.275938 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:23:12.278979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:23:12.285794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:12.288037 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:23:12.291901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.292724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:12.296700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:12.299357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:12.299565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:12.299635 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:23:12.299696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.304430 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:23:12.306065 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:23:12.307618 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.307717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:12.307771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:23:12.307826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.310009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.313813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:23:12.314087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:12.314161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:23:12.314263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:12.324606 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:23:12.326854 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:23:12.328163 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:23:12.333048 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:23:12.333648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:12.336771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:12.342996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:12.343154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:12.344192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:12.344308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:12.353157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:23:12.353193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:12.361730 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:23:12.361866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:23:12.366419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:23:12.367884 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:23:12.369638 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:23:12.369753 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:23:12.390346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 13 00:23:12.392554 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:23:12.398284 augenrules[1561]: No rules
Sep 13 00:23:12.399170 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:23:12.399800 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 00:23:12.400933 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:23:12.412640 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:23:12.413773 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:23:12.490448 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:23:12.490690 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:23:12.498645 systemd-networkd[1519]: lo: Link UP
Sep 13 00:23:12.498650 systemd-networkd[1519]: lo: Gained carrier
Sep 13 00:23:12.499432 systemd-networkd[1519]: Enumeration completed
Sep 13 00:23:12.499481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:23:12.501482 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 00:23:12.502934 systemd-resolved[1521]: Positive Trust Anchors:
Sep 13 00:23:12.502941 systemd-resolved[1521]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:23:12.502964 systemd-resolved[1521]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:23:12.503891 systemd-networkd[1519]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Sep 13 00:23:12.505842 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 13 00:23:12.505971 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 13 00:23:12.506295 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:23:12.509954 systemd-resolved[1521]: Defaulting to hostname 'linux'.
Sep 13 00:23:12.510010 systemd-networkd[1519]: ens192: Link UP
Sep 13 00:23:12.510407 systemd-networkd[1519]: ens192: Gained carrier
Sep 13 00:23:12.512002 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:12.512654 systemd[1]: Reached target network.target - Network.
Sep 13 00:23:12.512755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:12.512878 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:23:12.513030 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:23:12.513169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:23:12.513286 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 00:23:12.513528 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:23:12.513961 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:23:12.514078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:23:12.514193 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:23:12.514209 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:23:12.514541 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:23:12.515796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:23:12.515898 systemd-timesyncd[1538]: Network configuration changed, trying to establish connection.
Sep 13 00:23:12.517560 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Sep 13 00:23:12.517484 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:23:12.519342 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 00:23:12.520066 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 00:23:12.520189 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 00:23:12.522143 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:23:12.522450 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 00:23:12.523413 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:23:12.524678 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 00:23:12.525072 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:23:12.525620 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:23:12.525798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:23:12.525819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:23:12.527204 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:23:12.528654 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:23:12.530364 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:23:12.531667 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:23:12.533363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:23:12.533480 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:23:12.537737 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 00:23:12.541189 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:23:12.543104 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:23:12.546659 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:23:12.549768 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:23:12.550808 extend-filesystems[1588]: Found /dev/sda6
Sep 13 00:23:12.554268 extend-filesystems[1588]: Found /dev/sda9
Sep 13 00:23:12.557166 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:23:12.558056 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:23:12.558520 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:23:12.563308 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:23:12.565638 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:23:12.568507 jq[1587]: false
Sep 13 00:23:12.570735 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Sep 13 00:23:12.575716 extend-filesystems[1588]: Checking size of /dev/sda9
Sep 13 00:23:12.577294 jq[1605]: true
Sep 13 00:23:12.578535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:23:12.579791 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:23:12.579923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:23:12.581332 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:23:12.581451 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:23:12.596217 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing passwd entry cache
Sep 13 00:23:12.595813 oslogin_cache_refresh[1589]: Refreshing passwd entry cache
Sep 13 00:23:12.603828 update_engine[1600]: I20250913 00:23:12.603781 1600 main.cc:92] Flatcar Update Engine starting
Sep 13 00:23:12.606349 jq[1616]: true
Sep 13 00:23:12.606786 extend-filesystems[1588]: Old size kept for /dev/sda9
Sep 13 00:23:12.610849 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:23:12.612315 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:23:12.615001 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting users, quitting
Sep 13 00:23:12.615001 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:23:12.615001 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing group entry cache
Sep 13 00:23:12.613652 oslogin_cache_refresh[1589]: Failure getting users, quitting
Sep 13 00:23:12.613666 oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 00:23:12.613693 oslogin_cache_refresh[1589]: Refreshing group entry cache
Sep 13 00:23:12.616747 (ntainerd)[1628]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:23:12.630749 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting groups, quitting
Sep 13 00:23:12.630749 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:23:12.628394 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 00:23:12.626358 oslogin_cache_refresh[1589]: Failure getting groups, quitting
Sep 13 00:23:12.626365 oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 00:23:12.631016 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 00:23:12.641828 tar[1615]: linux-amd64/LICENSE
Sep 13 00:23:12.643084 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:23:12.643446 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:23:12.645063 tar[1615]: linux-amd64/helm
Sep 13 00:23:12.646179 (udev-worker)[1444]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 13 00:23:12.650517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:12.697770 dbus-daemon[1585]: [system] SELinux support is enabled
Sep 13 00:23:12.697886 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:23:12.701069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:23:12.701096 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:23:12.701320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:23:12.701332 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:23:12.702856 bash[1659]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:23:12.704846 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:23:12.705273 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:23:12.715499 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:23:12.718050 update_engine[1600]: I20250913 00:23:12.716768 1600 update_check_scheduler.cc:74] Next update check in 2m26s
Sep 13 00:23:12.718970 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:23:12.744090 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Sep 13 00:23:12.748175 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Sep 13 00:23:12.814505 sshd_keygen[1639]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:23:12.879293 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:23:12.881419 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:23:12.891452 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Sep 13 00:23:12.898351 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:23:12.898713 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:23:12.900710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:23:12.901853 locksmithd[1662]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:23:12.916171 unknown[1665]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Sep 13 00:23:12.917187 unknown[1665]: Core dump limit set to -1
Sep 13 00:23:12.940344 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:23:12.943291 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:23:12.946234 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:23:12.946368 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:23:12.947517 systemd-logind[1599]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 00:23:12.947528 systemd-logind[1599]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:23:12.948276 systemd-logind[1599]: New seat seat0.
Sep 13 00:23:12.949050 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:23:12.970994 containerd[1628]: time="2025-09-13T00:23:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 00:23:12.973296 containerd[1628]: time="2025-09-13T00:23:12.971940833Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.988989243Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.892µs"
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989013230Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989024359Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989117375Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989126736Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989141922Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989178687Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989186247Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989316328Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989324095Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989330508Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 00:23:12.989851 containerd[1628]: time="2025-09-13T00:23:12.989335239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 00:23:12.990066 containerd[1628]: time="2025-09-13T00:23:12.989377874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 00:23:12.990066 containerd[1628]: time="2025-09-13T00:23:12.989492244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 00:23:12.990066 containerd[1628]: time="2025-09-13T00:23:12.989508034Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 00:23:12.990066 containerd[1628]: time="2025-09-13T00:23:12.989513977Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 00:23:12.990066 containerd[1628]: time="2025-09-13T00:23:12.989537153Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 00:23:12.992783 containerd[1628]: time="2025-09-13T00:23:12.992679060Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 00:23:12.992783 containerd[1628]: time="2025-09-13T00:23:12.992722996Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:23:13.015960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:13.016950 containerd[1628]: time="2025-09-13T00:23:13.016932896Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 00:23:13.016977 containerd[1628]: time="2025-09-13T00:23:13.016965223Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 00:23:13.016991 containerd[1628]: time="2025-09-13T00:23:13.016975913Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 00:23:13.016991 containerd[1628]: time="2025-09-13T00:23:13.016983640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 00:23:13.017022 containerd[1628]: time="2025-09-13T00:23:13.016994118Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 00:23:13.017022 containerd[1628]: time="2025-09-13T00:23:13.017001561Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 00:23:13.017049 containerd[1628]: time="2025-09-13T00:23:13.017010013Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 00:23:13.017049 containerd[1628]: time="2025-09-13T00:23:13.017038214Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 00:23:13.017049 containerd[1628]: time="2025-09-13T00:23:13.017046575Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 00:23:13.017084 containerd[1628]: time="2025-09-13T00:23:13.017052110Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 13 00:23:13.017084 containerd[1628]: time="2025-09-13T00:23:13.017057080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 13 00:23:13.017084 containerd[1628]: time="2025-09-13T00:23:13.017064596Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 13 00:23:13.017133 containerd[1628]: time="2025-09-13T00:23:13.017122775Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 13 00:23:13.017152 containerd[1628]: time="2025-09-13T00:23:13.017136655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 13 00:23:13.017152 containerd[1628]: time="2025-09-13T00:23:13.017146049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 13 00:23:13.017176 containerd[1628]: time="2025-09-13T00:23:13.017151954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 13 00:23:13.017176 containerd[1628]: time="2025-09-13T00:23:13.017158331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 13 00:23:13.017176 containerd[1628]: time="2025-09-13T00:23:13.017163664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 13 00:23:13.017176 containerd[1628]: time="2025-09-13T00:23:13.017169521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 13 00:23:13.017176 containerd[1628]: time="2025-09-13T00:23:13.017174929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 13
00:23:13.017248 containerd[1628]: time="2025-09-13T00:23:13.017180656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 00:23:13.017248 containerd[1628]: time="2025-09-13T00:23:13.017189218Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 00:23:13.017248 containerd[1628]: time="2025-09-13T00:23:13.017195894Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 00:23:13.017248 containerd[1628]: time="2025-09-13T00:23:13.017228589Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 00:23:13.017248 containerd[1628]: time="2025-09-13T00:23:13.017236751Z" level=info msg="Start snapshots syncer" Sep 13 00:23:13.017310 containerd[1628]: time="2025-09-13T00:23:13.017255294Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 00:23:13.017474 containerd[1628]: time="2025-09-13T00:23:13.017433494Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 00:23:13.017584 containerd[1628]: time="2025-09-13T00:23:13.017509311Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 00:23:13.017601 containerd[1628]: time="2025-09-13T00:23:13.017589851Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 00:23:13.017663 containerd[1628]: time="2025-09-13T00:23:13.017652545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 00:23:13.017678 containerd[1628]: time="2025-09-13T00:23:13.017666666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 00:23:13.017678 containerd[1628]: time="2025-09-13T00:23:13.017673069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 00:23:13.017704 containerd[1628]: time="2025-09-13T00:23:13.017679638Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 00:23:13.017704 containerd[1628]: time="2025-09-13T00:23:13.017686393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 00:23:13.017732 containerd[1628]: time="2025-09-13T00:23:13.017692653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 00:23:13.017732 containerd[1628]: time="2025-09-13T00:23:13.017713861Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 00:23:13.017732 containerd[1628]: time="2025-09-13T00:23:13.017728536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 00:23:13.017768 containerd[1628]: time="2025-09-13T00:23:13.017738464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 00:23:13.017768 containerd[1628]: time="2025-09-13T00:23:13.017744761Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 00:23:13.017795 containerd[1628]: time="2025-09-13T00:23:13.017766726Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:23:13.017795 containerd[1628]: time="2025-09-13T00:23:13.017775663Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:23:13.017795 containerd[1628]: time="2025-09-13T00:23:13.017780360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:23:13.017795 containerd[1628]: time="2025-09-13T00:23:13.017785414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:23:13.017795 containerd[1628]: time="2025-09-13T00:23:13.017789555Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017800300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017806723Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017837467Z" level=info msg="runtime interface created" Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017842045Z" level=info msg="created NRI interface" Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017849015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 00:23:13.017855 containerd[1628]: time="2025-09-13T00:23:13.017855326Z" level=info msg="Connect containerd service" Sep 13 00:23:13.017928 containerd[1628]: time="2025-09-13T00:23:13.017878156Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:23:13.019171 
containerd[1628]: time="2025-09-13T00:23:13.019139424Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:23:13.170663 containerd[1628]: time="2025-09-13T00:23:13.170596196Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:23:13.170663 containerd[1628]: time="2025-09-13T00:23:13.170633887Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:23:13.170663 containerd[1628]: time="2025-09-13T00:23:13.170648748Z" level=info msg="Start subscribing containerd event" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170664042Z" level=info msg="Start recovering state" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170713969Z" level=info msg="Start event monitor" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170721893Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170726668Z" level=info msg="Start streaming server" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170733789Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170737728Z" level=info msg="runtime interface starting up..." Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170740816Z" level=info msg="starting plugins..." Sep 13 00:23:13.170754 containerd[1628]: time="2025-09-13T00:23:13.170747809Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 00:23:13.171619 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 13 00:23:13.172786 containerd[1628]: time="2025-09-13T00:23:13.172752875Z" level=info msg="containerd successfully booted in 0.202391s" Sep 13 00:23:13.204204 tar[1615]: linux-amd64/README.md Sep 13 00:23:13.211736 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:23:13.576727 systemd-networkd[1519]: ens192: Gained IPv6LL Sep 13 00:23:13.577044 systemd-timesyncd[1538]: Network configuration changed, trying to establish connection. Sep 13 00:23:13.578120 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:23:13.578651 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:23:13.579845 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 13 00:23:13.581204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:13.585843 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:23:13.609512 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:23:13.619843 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:23:13.620094 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 13 00:23:13.620666 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:23:14.544154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:14.544970 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:23:14.545568 systemd[1]: Startup finished in 2.728s (kernel) + 5.141s (initrd) + 3.847s (userspace) = 11.718s. 
Sep 13 00:23:14.556866 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:14.592021 login[1696]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:23:14.593772 login[1697]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:23:14.599872 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:23:14.601207 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:23:14.607963 systemd-logind[1599]: New session 1 of user core. Sep 13 00:23:14.610175 systemd-logind[1599]: New session 2 of user core. Sep 13 00:23:14.618343 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:23:14.620369 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:23:14.639050 (systemd)[1807]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:23:14.640496 systemd-logind[1599]: New session c1 of user core. Sep 13 00:23:14.733014 systemd[1807]: Queued start job for default target default.target. Sep 13 00:23:14.740443 systemd[1807]: Created slice app.slice - User Application Slice. Sep 13 00:23:14.740461 systemd[1807]: Reached target paths.target - Paths. Sep 13 00:23:14.740488 systemd[1807]: Reached target timers.target - Timers. Sep 13 00:23:14.743624 systemd[1807]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:23:14.749131 systemd[1807]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:23:14.749239 systemd[1807]: Reached target sockets.target - Sockets. Sep 13 00:23:14.749334 systemd[1807]: Reached target basic.target - Basic System. Sep 13 00:23:14.749396 systemd[1807]: Reached target default.target - Main User Target. Sep 13 00:23:14.749415 systemd[1807]: Startup finished in 104ms. 
Sep 13 00:23:14.749917 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:23:14.755831 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:23:14.756557 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:23:15.080228 systemd-timesyncd[1538]: Network configuration changed, trying to establish connection. Sep 13 00:23:15.224134 kubelet[1800]: E0913 00:23:15.224095 1800 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:15.225736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:15.225823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:15.226197 systemd[1]: kubelet.service: Consumed 634ms CPU time, 264.3M memory peak. Sep 13 00:23:21.347855 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:23:21.349671 systemd[1]: Started sshd@0-139.178.70.110:22-34.123.134.194:55044.service - OpenSSH per-connection server daemon (34.123.134.194:55044). Sep 13 00:23:22.117011 sshd[1844]: Received disconnect from 34.123.134.194 port 55044:11: Bye Bye [preauth] Sep 13 00:23:22.117011 sshd[1844]: Disconnected from authenticating user root 34.123.134.194 port 55044 [preauth] Sep 13 00:23:22.118161 systemd[1]: sshd@0-139.178.70.110:22-34.123.134.194:55044.service: Deactivated successfully. Sep 13 00:23:25.443977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:23:25.445348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:25.802138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:23:25.807832 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:25.877268 kubelet[1856]: E0913 00:23:25.877235 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:25.880383 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:25.880572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:25.881047 systemd[1]: kubelet.service: Consumed 113ms CPU time, 108.5M memory peak. Sep 13 00:23:35.943976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:23:35.945436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:36.282073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:36.285326 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:36.307767 kubelet[1871]: E0913 00:23:36.307743 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:36.309187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:36.309267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:36.309459 systemd[1]: kubelet.service: Consumed 99ms CPU time, 108.2M memory peak. 
Sep 13 00:23:43.018107 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.89.65:40504.service - OpenSSH per-connection server daemon (139.178.89.65:40504). Sep 13 00:23:43.056370 sshd[1880]: Accepted publickey for core from 139.178.89.65 port 40504 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.057174 sshd-session[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.060448 systemd-logind[1599]: New session 3 of user core. Sep 13 00:23:43.068642 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:23:43.124700 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.89.65:40506.service - OpenSSH per-connection server daemon (139.178.89.65:40506). Sep 13 00:23:43.157058 sshd[1885]: Accepted publickey for core from 139.178.89.65 port 40506 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.157616 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.160200 systemd-logind[1599]: New session 4 of user core. Sep 13 00:23:43.163640 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:23:43.212012 sshd[1887]: Connection closed by 139.178.89.65 port 40506 Sep 13 00:23:43.211964 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:43.222170 systemd[1]: sshd@2-139.178.70.110:22-139.178.89.65:40506.service: Deactivated successfully. Sep 13 00:23:43.223348 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:23:43.223964 systemd-logind[1599]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:23:43.226173 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.89.65:40518.service - OpenSSH per-connection server daemon (139.178.89.65:40518). Sep 13 00:23:43.226945 systemd-logind[1599]: Removed session 4. 
Sep 13 00:23:43.264053 sshd[1893]: Accepted publickey for core from 139.178.89.65 port 40518 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.264841 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.268006 systemd-logind[1599]: New session 5 of user core. Sep 13 00:23:43.274654 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:23:43.321380 sshd[1895]: Connection closed by 139.178.89.65 port 40518 Sep 13 00:23:43.321855 sshd-session[1893]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:43.332148 systemd[1]: sshd@3-139.178.70.110:22-139.178.89.65:40518.service: Deactivated successfully. Sep 13 00:23:43.333224 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:23:43.333865 systemd-logind[1599]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:23:43.335530 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.89.65:40520.service - OpenSSH per-connection server daemon (139.178.89.65:40520). Sep 13 00:23:43.337682 systemd-logind[1599]: Removed session 5. Sep 13 00:23:43.376918 sshd[1901]: Accepted publickey for core from 139.178.89.65 port 40520 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.377728 sshd-session[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.381052 systemd-logind[1599]: New session 6 of user core. Sep 13 00:23:43.395933 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:23:43.444615 sshd[1903]: Connection closed by 139.178.89.65 port 40520 Sep 13 00:23:43.445324 sshd-session[1901]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:43.451076 systemd[1]: sshd@4-139.178.70.110:22-139.178.89.65:40520.service: Deactivated successfully. Sep 13 00:23:43.452257 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:23:43.452862 systemd-logind[1599]: Session 6 logged out. 
Waiting for processes to exit. Sep 13 00:23:43.455112 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.89.65:40536.service - OpenSSH per-connection server daemon (139.178.89.65:40536). Sep 13 00:23:43.456056 systemd-logind[1599]: Removed session 6. Sep 13 00:23:43.495581 sshd[1909]: Accepted publickey for core from 139.178.89.65 port 40536 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.496353 sshd-session[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.499600 systemd-logind[1599]: New session 7 of user core. Sep 13 00:23:43.509706 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:23:43.573024 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:23:43.573229 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:43.586153 sudo[1912]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:43.586981 sshd[1911]: Connection closed by 139.178.89.65 port 40536 Sep 13 00:23:43.587853 sshd-session[1909]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:43.594917 systemd[1]: sshd@5-139.178.70.110:22-139.178.89.65:40536.service: Deactivated successfully. Sep 13 00:23:43.596408 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:23:43.597316 systemd-logind[1599]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:23:43.599660 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.89.65:40538.service - OpenSSH per-connection server daemon (139.178.89.65:40538). Sep 13 00:23:43.600808 systemd-logind[1599]: Removed session 7. 
Sep 13 00:23:43.635438 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 40538 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.636287 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.640224 systemd-logind[1599]: New session 8 of user core. Sep 13 00:23:43.646657 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:23:43.695736 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:23:43.696126 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:43.698941 sudo[1922]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:43.702684 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 00:23:43.702876 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:43.709803 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:23:43.745363 augenrules[1944]: No rules Sep 13 00:23:43.746117 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:23:43.746375 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:23:43.747164 sudo[1921]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:43.748348 sshd[1920]: Connection closed by 139.178.89.65 port 40538 Sep 13 00:23:43.748301 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:43.756149 systemd[1]: sshd@6-139.178.70.110:22-139.178.89.65:40538.service: Deactivated successfully. Sep 13 00:23:43.757137 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:23:43.757740 systemd-logind[1599]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 00:23:43.759302 systemd[1]: Started sshd@7-139.178.70.110:22-139.178.89.65:40554.service - OpenSSH per-connection server daemon (139.178.89.65:40554). Sep 13 00:23:43.760731 systemd-logind[1599]: Removed session 8. Sep 13 00:23:43.796074 sshd[1953]: Accepted publickey for core from 139.178.89.65 port 40554 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:23:43.796873 sshd-session[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.800697 systemd-logind[1599]: New session 9 of user core. Sep 13 00:23:43.806643 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:23:43.855654 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:23:43.855846 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:44.152421 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:23:44.161790 (dockerd)[1974]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:23:44.358954 dockerd[1974]: time="2025-09-13T00:23:44.358750048Z" level=info msg="Starting up" Sep 13 00:23:44.359714 dockerd[1974]: time="2025-09-13T00:23:44.359702036Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 00:23:44.375673 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3784842702-merged.mount: Deactivated successfully. Sep 13 00:23:44.392690 dockerd[1974]: time="2025-09-13T00:23:44.392554180Z" level=info msg="Loading containers: start." Sep 13 00:23:44.399562 kernel: Initializing XFRM netlink socket Sep 13 00:23:44.520194 systemd-timesyncd[1538]: Network configuration changed, trying to establish connection. 
Sep 13 00:23:44.543601 systemd-networkd[1519]: docker0: Link UP Sep 13 00:23:44.544699 dockerd[1974]: time="2025-09-13T00:23:44.544679250Z" level=info msg="Loading containers: done." Sep 13 00:23:44.564725 dockerd[1974]: time="2025-09-13T00:23:44.564689527Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:23:44.564829 dockerd[1974]: time="2025-09-13T00:23:44.564754575Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 13 00:25:17.174972 dockerd[1974]: time="2025-09-13T00:23:44.564867900Z" level=info msg="Initializing buildkit" Sep 13 00:25:17.174866 systemd-timesyncd[1538]: Contacted time server 74.208.14.149:123 (2.flatcar.pool.ntp.org). Sep 13 00:25:17.174903 systemd-timesyncd[1538]: Initial clock synchronization to Sat 2025-09-13 00:25:17.174694 UTC. Sep 13 00:25:17.174937 systemd-resolved[1521]: Clock change detected. Flushing caches. Sep 13 00:25:17.204713 dockerd[1974]: time="2025-09-13T00:25:17.204682680Z" level=info msg="Completed buildkit initialization" Sep 13 00:25:17.209618 dockerd[1974]: time="2025-09-13T00:25:17.209579371Z" level=info msg="Daemon has completed initialization" Sep 13 00:25:17.209823 dockerd[1974]: time="2025-09-13T00:25:17.209678673Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:25:17.209766 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:25:17.982154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3372910632-merged.mount: Deactivated successfully. Sep 13 00:25:17.998448 containerd[1628]: time="2025-09-13T00:25:17.998417852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:25:18.809054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184762033.mount: Deactivated successfully. 
Sep 13 00:25:19.053477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:25:19.055403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:19.296657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:19.305743 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:25:19.365097 kubelet[2237]: E0913 00:25:19.365064 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:25:19.366672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:25:19.366766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:25:19.367102 systemd[1]: kubelet.service: Consumed 100ms CPU time, 108.8M memory peak. 
Sep 13 00:25:19.913527 containerd[1628]: time="2025-09-13T00:25:19.913503026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:19.914333 containerd[1628]: time="2025-09-13T00:25:19.914314476Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 13 00:25:19.915372 containerd[1628]: time="2025-09-13T00:25:19.914612940Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:19.915892 containerd[1628]: time="2025-09-13T00:25:19.915878842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:19.916496 containerd[1628]: time="2025-09-13T00:25:19.916481494Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.918015189s"
Sep 13 00:25:19.916553 containerd[1628]: time="2025-09-13T00:25:19.916538624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 13 00:25:19.917001 containerd[1628]: time="2025-09-13T00:25:19.916986168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 13 00:25:20.986392 containerd[1628]: time="2025-09-13T00:25:20.986127185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:20.993826 containerd[1628]: time="2025-09-13T00:25:20.993795301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 13 00:25:21.001415 containerd[1628]: time="2025-09-13T00:25:21.001390445Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:21.007521 containerd[1628]: time="2025-09-13T00:25:21.007484023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:21.008251 containerd[1628]: time="2025-09-13T00:25:21.008062170Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.091020884s"
Sep 13 00:25:21.008251 containerd[1628]: time="2025-09-13T00:25:21.008085440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 13 00:25:21.008651 containerd[1628]: time="2025-09-13T00:25:21.008630345Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 13 00:25:22.314378 containerd[1628]: time="2025-09-13T00:25:22.314130743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:22.316000 containerd[1628]: time="2025-09-13T00:25:22.315854492Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 13 00:25:22.316337 containerd[1628]: time="2025-09-13T00:25:22.316318391Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:22.318377 containerd[1628]: time="2025-09-13T00:25:22.318176983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:22.319134 containerd[1628]: time="2025-09-13T00:25:22.318959496Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.310306438s"
Sep 13 00:25:22.319134 containerd[1628]: time="2025-09-13T00:25:22.318989837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 13 00:25:22.319415 containerd[1628]: time="2025-09-13T00:25:22.319386042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 13 00:25:23.343451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208599533.mount: Deactivated successfully.
Sep 13 00:25:23.757426 containerd[1628]: time="2025-09-13T00:25:23.757225300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:23.762911 containerd[1628]: time="2025-09-13T00:25:23.762880310Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 13 00:25:23.768294 containerd[1628]: time="2025-09-13T00:25:23.768261243Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:23.773382 containerd[1628]: time="2025-09-13T00:25:23.773342589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:23.773952 containerd[1628]: time="2025-09-13T00:25:23.773726272Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.45431975s"
Sep 13 00:25:23.773952 containerd[1628]: time="2025-09-13T00:25:23.773752377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 13 00:25:23.774079 containerd[1628]: time="2025-09-13T00:25:23.774057730Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:25:24.441049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344008347.mount: Deactivated successfully.
Sep 13 00:25:25.240372 containerd[1628]: time="2025-09-13T00:25:25.240094096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:25.250252 containerd[1628]: time="2025-09-13T00:25:25.250228914Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 13 00:25:25.263619 containerd[1628]: time="2025-09-13T00:25:25.263580719Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:25.273786 containerd[1628]: time="2025-09-13T00:25:25.273492046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:25.274218 containerd[1628]: time="2025-09-13T00:25:25.274195284Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.50011249s"
Sep 13 00:25:25.274262 containerd[1628]: time="2025-09-13T00:25:25.274219517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:25:25.274656 containerd[1628]: time="2025-09-13T00:25:25.274637405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:25:25.765228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497941880.mount: Deactivated successfully.
Sep 13 00:25:25.768954 containerd[1628]: time="2025-09-13T00:25:25.768532349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:25.769213 containerd[1628]: time="2025-09-13T00:25:25.769201953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 00:25:25.769699 containerd[1628]: time="2025-09-13T00:25:25.769682375Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:25.771079 containerd[1628]: time="2025-09-13T00:25:25.771065935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:25.771857 containerd[1628]: time="2025-09-13T00:25:25.771845044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.188901ms"
Sep 13 00:25:25.771909 containerd[1628]: time="2025-09-13T00:25:25.771901840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:25:25.772234 containerd[1628]: time="2025-09-13T00:25:25.772223522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 13 00:25:26.309688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862638512.mount: Deactivated successfully.
Sep 13 00:25:29.553669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 13 00:25:29.555425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:30.450874 update_engine[1600]: I20250913 00:25:30.450809 1600 update_attempter.cc:509] Updating boot flags...
Sep 13 00:25:30.814204 containerd[1628]: time="2025-09-13T00:25:30.814170595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:30.861592 containerd[1628]: time="2025-09-13T00:25:30.861539628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 13 00:25:30.905049 containerd[1628]: time="2025-09-13T00:25:30.904083714Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:30.915834 containerd[1628]: time="2025-09-13T00:25:30.915788960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:30.916827 containerd[1628]: time="2025-09-13T00:25:30.916585744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.14429131s"
Sep 13 00:25:30.916827 containerd[1628]: time="2025-09-13T00:25:30.916609815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 13 00:25:31.060063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:31.068562 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:25:31.150315 kubelet[2411]: E0913 00:25:31.150279 2411 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:25:31.151560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:25:31.151648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:25:31.151851 systemd[1]: kubelet.service: Consumed 117ms CPU time, 112.1M memory peak.
Sep 13 00:25:32.457658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:32.458096 systemd[1]: kubelet.service: Consumed 117ms CPU time, 112.1M memory peak.
Sep 13 00:25:32.460790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:32.482743 systemd[1]: Reload requested from client PID 2439 ('systemctl') (unit session-9.scope)...
Sep 13 00:25:32.482846 systemd[1]: Reloading...
Sep 13 00:25:32.562395 zram_generator::config[2486]: No configuration found.
Sep 13 00:25:32.627010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:25:32.635399 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 13 00:25:32.703231 systemd[1]: Reloading finished in 220 ms.
Sep 13 00:25:32.738990 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:25:32.739045 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:25:32.739305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:32.740647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:33.075040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:33.080680 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:25:33.162842 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:25:33.163063 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:25:33.163104 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:25:33.163224 kubelet[2550]: I0913 00:25:33.163202 2550 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:25:33.498230 kubelet[2550]: I0913 00:25:33.498006 2550 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:25:33.498230 kubelet[2550]: I0913 00:25:33.498024 2550 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:25:33.498230 kubelet[2550]: I0913 00:25:33.498203 2550 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:25:33.563078 kubelet[2550]: I0913 00:25:33.562428 2550 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:25:33.563078 kubelet[2550]: E0913 00:25:33.563017 2550 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:33.576395 kubelet[2550]: I0913 00:25:33.576374 2550 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 00:25:33.580875 kubelet[2550]: I0913 00:25:33.580709 2550 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:25:33.590260 kubelet[2550]: I0913 00:25:33.590239 2550 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:25:33.599481 kubelet[2550]: I0913 00:25:33.590301 2550 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:25:33.604678 kubelet[2550]: I0913 00:25:33.604562 2550 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:25:33.604678 kubelet[2550]: I0913 00:25:33.604573 2550 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:25:33.608942 kubelet[2550]: I0913 00:25:33.608933 2550 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:25:33.629364 kubelet[2550]: I0913 00:25:33.629355 2550 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:25:33.629408 kubelet[2550]: I0913 00:25:33.629404 2550 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:25:33.630799 kubelet[2550]: I0913 00:25:33.630756 2550 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:25:33.630799 kubelet[2550]: I0913 00:25:33.630767 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:25:33.641738 kubelet[2550]: W0913 00:25:33.641695 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Sep 13 00:25:33.641812 kubelet[2550]: E0913 00:25:33.641760 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:33.642722 kubelet[2550]: W0913 00:25:33.642676 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Sep 13 00:25:33.642722 kubelet[2550]: E0913 00:25:33.642702 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:33.642883 kubelet[2550]: I0913 00:25:33.642869 2550 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 13 00:25:33.645154 kubelet[2550]: I0913 00:25:33.645138 2550 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:25:33.645656 kubelet[2550]: W0913 00:25:33.645640 2550 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:25:33.646091 kubelet[2550]: I0913 00:25:33.645986 2550 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:25:33.646091 kubelet[2550]: I0913 00:25:33.646006 2550 server.go:1287] "Started kubelet"
Sep 13 00:25:33.646246 kubelet[2550]: I0913 00:25:33.646228 2550 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:25:33.646998 kubelet[2550]: I0913 00:25:33.646989 2550 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:25:33.650759 kubelet[2550]: I0913 00:25:33.650747 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:25:33.652403 kubelet[2550]: I0913 00:25:33.652360 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:25:33.652514 kubelet[2550]: I0913 00:25:33.652501 2550 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:25:33.655994 kubelet[2550]: E0913 00:25:33.653296 2550 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864afe09274bb18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:25:33.645994776 +0000 UTC m=+0.563201944,LastTimestamp:2025-09-13 00:25:33.645994776 +0000 UTC m=+0.563201944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:25:33.655994 kubelet[2550]: I0913 00:25:33.655712 2550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:25:33.657848 kubelet[2550]: I0913 00:25:33.657638 2550 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:25:33.657848 kubelet[2550]: E0913 00:25:33.657747 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:25:33.659819 kubelet[2550]: I0913 00:25:33.659079 2550 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:25:33.659819 kubelet[2550]: I0913 00:25:33.659115 2550 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:25:33.659819 kubelet[2550]: W0913 00:25:33.659319 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Sep 13 00:25:33.659819 kubelet[2550]: E0913 00:25:33.659357 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:33.659819 kubelet[2550]: E0913 00:25:33.659404 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms"
Sep 13 00:25:33.670178 kubelet[2550]: I0913 00:25:33.670152 2550 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:25:33.670273 kubelet[2550]: I0913 00:25:33.670222 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:25:33.674140 kubelet[2550]: I0913 00:25:33.674121 2550 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:25:33.676400 kubelet[2550]: E0913 00:25:33.676363 2550 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:25:33.679717 kubelet[2550]: I0913 00:25:33.679641 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:25:33.683666 kubelet[2550]: I0913 00:25:33.683646 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:25:33.683666 kubelet[2550]: I0913 00:25:33.683659 2550 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:25:33.683763 kubelet[2550]: I0913 00:25:33.683670 2550 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:25:33.683763 kubelet[2550]: I0913 00:25:33.683676 2550 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:25:33.683763 kubelet[2550]: E0913 00:25:33.683699 2550 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:25:33.684124 kubelet[2550]: W0913 00:25:33.684098 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Sep 13 00:25:33.684164 kubelet[2550]: E0913 00:25:33.684127 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:33.684829 kubelet[2550]: I0913 00:25:33.684816 2550 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:25:33.684829 kubelet[2550]: I0913 00:25:33.684824 2550 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:25:33.684933 kubelet[2550]: I0913 00:25:33.684842 2550 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:25:33.691438 kubelet[2550]: I0913 00:25:33.691422 2550 policy_none.go:49] "None policy: Start"
Sep 13 00:25:33.691438 kubelet[2550]: I0913 00:25:33.691435 2550 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:25:33.691497 kubelet[2550]: I0913 00:25:33.691442 2550 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:25:33.698939 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:25:33.708328 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:25:33.710774 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:25:33.718853 kubelet[2550]: I0913 00:25:33.718837 2550 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:25:33.719712 kubelet[2550]: I0913 00:25:33.719700 2550 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:25:33.719749 kubelet[2550]: I0913 00:25:33.719713 2550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:25:33.719895 kubelet[2550]: I0913 00:25:33.719884 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:25:33.720552 kubelet[2550]: E0913 00:25:33.720543 2550 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:25:33.720647 kubelet[2550]: E0913 00:25:33.720640 2550 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 13 00:25:33.795579 systemd[1]: Created slice kubepods-burstable-podeafe04933aed4fc861359a595cd87f2a.slice - libcontainer container kubepods-burstable-podeafe04933aed4fc861359a595cd87f2a.slice.
Sep 13 00:25:33.804901 kubelet[2550]: E0913 00:25:33.804880 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:25:33.807086 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 13 00:25:33.820643 kubelet[2550]: I0913 00:25:33.820630 2550 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:25:33.820954 kubelet[2550]: E0913 00:25:33.820939 2550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Sep 13 00:25:33.823287 kubelet[2550]: E0913 00:25:33.823252 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:25:33.824925 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 13 00:25:33.826294 kubelet[2550]: E0913 00:25:33.826262 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:25:33.860147 kubelet[2550]: E0913 00:25:33.860115 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms"
Sep 13 00:25:33.960642 kubelet[2550]: I0913 00:25:33.960449 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:25:33.960642 kubelet[2550]: I0913 00:25:33.960489 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:25:33.960642 kubelet[2550]: I0913 00:25:33.960505 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:25:33.960642 kubelet[2550]: I0913 00:25:33.960548 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:25:33.960642 kubelet[2550]: I0913 00:25:33.960561 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:25:33.960864 kubelet[2550]: I0913 00:25:33.960570 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:25:33.960864 kubelet[2550]: I0913 00:25:33.960580 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:25:33.960864 kubelet[2550]: I0913 00:25:33.960590 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:25:33.960864 kubelet[2550]: I0913 00:25:33.960598 2550 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:25:34.022615 kubelet[2550]: I0913 00:25:34.022442 2550 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:25:34.022809 kubelet[2550]: E0913 00:25:34.022790 2550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Sep 13 00:25:34.105877 containerd[1628]: time="2025-09-13T00:25:34.105844951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eafe04933aed4fc861359a595cd87f2a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:34.124565 containerd[1628]: time="2025-09-13T00:25:34.124330666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:34.131667 containerd[1628]: time="2025-09-13T00:25:34.131635920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:34.224264 containerd[1628]: time="2025-09-13T00:25:34.224235942Z" level=info msg="connecting to shim 6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202" address="unix:///run/containerd/s/786257a001a8ab1a57234d543136e5d13736de9ae0012d30796bfb6185f31378" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:25:34.224780 containerd[1628]: time="2025-09-13T00:25:34.224235915Z" level=info msg="connecting to shim 91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a" address="unix:///run/containerd/s/cda1a86c10c0724aad7593375b429e7d462b801f1751b5f0e8484b9fe2bb4904" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:25:34.229185 containerd[1628]: time="2025-09-13T00:25:34.229164317Z" level=info msg="connecting to shim 99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453" address="unix:///run/containerd/s/1ce83a10da7d07608a4c808748a8daa83041c6f277a7d57b845c1306bef4734f" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:25:34.261003 kubelet[2550]: E0913 00:25:34.260979 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms"
Sep 13 00:25:34.399453 systemd[1]: Started cri-containerd-6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202.scope - libcontainer container 6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202.
Sep 13 00:25:34.401503 systemd[1]: Started cri-containerd-91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a.scope - libcontainer container 91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a.
Sep 13 00:25:34.403195 systemd[1]: Started cri-containerd-99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453.scope - libcontainer container 99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453. Sep 13 00:25:34.424216 kubelet[2550]: I0913 00:25:34.424196 2550 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:25:34.424595 kubelet[2550]: E0913 00:25:34.424394 2550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Sep 13 00:25:34.462368 containerd[1628]: time="2025-09-13T00:25:34.461833045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202\"" Sep 13 00:25:34.466684 containerd[1628]: time="2025-09-13T00:25:34.466483515Z" level=info msg="CreateContainer within sandbox \"6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:25:34.491600 containerd[1628]: time="2025-09-13T00:25:34.491554946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453\"" Sep 13 00:25:34.495198 containerd[1628]: time="2025-09-13T00:25:34.494694837Z" level=info msg="CreateContainer within sandbox \"99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:25:34.496140 kubelet[2550]: W0913 00:25:34.495955 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Sep 13 00:25:34.496191 kubelet[2550]: E0913 00:25:34.496147 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:34.547548 containerd[1628]: time="2025-09-13T00:25:34.547468487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eafe04933aed4fc861359a595cd87f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a\"" Sep 13 00:25:34.548876 containerd[1628]: time="2025-09-13T00:25:34.548860527Z" level=info msg="CreateContainer within sandbox \"91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:25:34.566117 kubelet[2550]: W0913 00:25:34.566075 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Sep 13 00:25:34.566177 kubelet[2550]: E0913 00:25:34.566124 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:34.600851 containerd[1628]: time="2025-09-13T00:25:34.600812174Z" level=info msg="Container 
8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:34.603365 containerd[1628]: time="2025-09-13T00:25:34.603156858Z" level=info msg="Container 373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:34.604996 containerd[1628]: time="2025-09-13T00:25:34.604965088Z" level=info msg="Container 7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:34.610325 containerd[1628]: time="2025-09-13T00:25:34.610299597Z" level=info msg="CreateContainer within sandbox \"91ae9977217a1afce5eb279cb720a449ea277d4890124c97828d58f476623f5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63\"" Sep 13 00:25:34.611751 containerd[1628]: time="2025-09-13T00:25:34.611725836Z" level=info msg="CreateContainer within sandbox \"6f931f3c4e0c006fbe609fbe0a7d58bc3ab6dead9bc9899ddd3535879870e202\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52\"" Sep 13 00:25:34.612587 containerd[1628]: time="2025-09-13T00:25:34.612568182Z" level=info msg="StartContainer for \"8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52\"" Sep 13 00:25:34.614362 containerd[1628]: time="2025-09-13T00:25:34.613946439Z" level=info msg="CreateContainer within sandbox \"99453712ba3a24bd7495199efea9d7c3a6c9815b73cb6592aa9e87f76ac51453\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd\"" Sep 13 00:25:34.614362 containerd[1628]: time="2025-09-13T00:25:34.614032062Z" level=info msg="StartContainer for \"373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63\"" Sep 13 00:25:34.614766 containerd[1628]: 
time="2025-09-13T00:25:34.614746454Z" level=info msg="connecting to shim 8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52" address="unix:///run/containerd/s/786257a001a8ab1a57234d543136e5d13736de9ae0012d30796bfb6185f31378" protocol=ttrpc version=3 Sep 13 00:25:34.615078 containerd[1628]: time="2025-09-13T00:25:34.615056424Z" level=info msg="StartContainer for \"7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd\"" Sep 13 00:25:34.615981 containerd[1628]: time="2025-09-13T00:25:34.615959032Z" level=info msg="connecting to shim 7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd" address="unix:///run/containerd/s/1ce83a10da7d07608a4c808748a8daa83041c6f277a7d57b845c1306bef4734f" protocol=ttrpc version=3 Sep 13 00:25:34.616083 containerd[1628]: time="2025-09-13T00:25:34.614772856Z" level=info msg="connecting to shim 373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63" address="unix:///run/containerd/s/cda1a86c10c0724aad7593375b429e7d462b801f1751b5f0e8484b9fe2bb4904" protocol=ttrpc version=3 Sep 13 00:25:34.638510 systemd[1]: Started cri-containerd-7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd.scope - libcontainer container 7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd. Sep 13 00:25:34.640432 systemd[1]: Started cri-containerd-8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52.scope - libcontainer container 8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52. Sep 13 00:25:34.645096 systemd[1]: Started cri-containerd-373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63.scope - libcontainer container 373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63. 
Sep 13 00:25:34.707378 containerd[1628]: time="2025-09-13T00:25:34.706532254Z" level=info msg="StartContainer for \"7179dc91444da6dbdc8c6b98736c928edbcf6f8156c101eb846469fe9044f9bd\" returns successfully" Sep 13 00:25:34.720899 kubelet[2550]: W0913 00:25:34.720847 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Sep 13 00:25:34.720899 kubelet[2550]: E0913 00:25:34.720888 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:34.731047 containerd[1628]: time="2025-09-13T00:25:34.730894242Z" level=info msg="StartContainer for \"373ef1971ea5633a21c4716b08465b82dc6db9df1ec2f87b1f51c9b877fe3c63\" returns successfully" Sep 13 00:25:34.731576 containerd[1628]: time="2025-09-13T00:25:34.731533309Z" level=info msg="StartContainer for \"8810f2400c1571550a7412a8ebb7d0b316a0495da7b7649e9b49833117fecf52\" returns successfully" Sep 13 00:25:35.061601 kubelet[2550]: E0913 00:25:35.061572 2550 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s" Sep 13 00:25:35.195585 kubelet[2550]: W0913 00:25:35.195547 2550 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Sep 13 
00:25:35.195585 kubelet[2550]: E0913 00:25:35.195588 2550 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:35.226209 kubelet[2550]: I0913 00:25:35.226184 2550 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:25:35.226397 kubelet[2550]: E0913 00:25:35.226378 2550 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Sep 13 00:25:35.640380 kubelet[2550]: E0913 00:25:35.639624 2550 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:35.706016 kubelet[2550]: E0913 00:25:35.705939 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:35.707194 kubelet[2550]: E0913 00:25:35.707177 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:35.708998 kubelet[2550]: E0913 00:25:35.708897 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:36.710286 kubelet[2550]: E0913 00:25:36.710264 2550 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:36.710531 kubelet[2550]: E0913 00:25:36.710450 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:36.710595 kubelet[2550]: E0913 00:25:36.710584 2550 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:25:36.827263 kubelet[2550]: I0913 00:25:36.827229 2550 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:25:36.928443 kubelet[2550]: E0913 00:25:36.928423 2550 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:25:36.990553 kubelet[2550]: I0913 00:25:36.990240 2550 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:25:36.990553 kubelet[2550]: E0913 00:25:36.990267 2550 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:25:36.997377 kubelet[2550]: E0913 00:25:36.997286 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.098412 kubelet[2550]: E0913 00:25:37.098382 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.199367 kubelet[2550]: E0913 00:25:37.199324 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.300459 kubelet[2550]: E0913 00:25:37.300119 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.400861 kubelet[2550]: E0913 00:25:37.400781 2550 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.501472 kubelet[2550]: E0913 00:25:37.501437 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.602403 kubelet[2550]: E0913 00:25:37.602363 2550 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:37.710995 kubelet[2550]: I0913 00:25:37.710975 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:37.711246 kubelet[2550]: I0913 00:25:37.711187 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:37.711760 kubelet[2550]: I0913 00:25:37.711371 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:37.720635 kubelet[2550]: E0913 00:25:37.720611 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:37.720807 kubelet[2550]: E0913 00:25:37.720611 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:37.720930 kubelet[2550]: E0913 00:25:37.720915 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:37.760330 kubelet[2550]: I0913 00:25:37.760296 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:37.762157 kubelet[2550]: E0913 
00:25:37.762084 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:37.762157 kubelet[2550]: I0913 00:25:37.762113 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:37.763410 kubelet[2550]: E0913 00:25:37.763390 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:37.763410 kubelet[2550]: I0913 00:25:37.763407 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:37.764424 kubelet[2550]: E0913 00:25:37.764390 2550 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:38.646106 kubelet[2550]: I0913 00:25:38.645453 2550 apiserver.go:52] "Watching apiserver" Sep 13 00:25:38.659852 kubelet[2550]: I0913 00:25:38.659843 2550 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:25:38.712277 kubelet[2550]: I0913 00:25:38.712191 2550 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:38.758720 systemd[1]: Reload requested from client PID 2816 ('systemctl') (unit session-9.scope)... Sep 13 00:25:38.758731 systemd[1]: Reloading... Sep 13 00:25:38.819415 zram_generator::config[2859]: No configuration found. 
Sep 13 00:25:38.892971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:25:38.901192 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 13 00:25:38.980979 systemd[1]: Reloading finished in 222 ms. Sep 13 00:25:39.007900 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:39.019733 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:25:39.020009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:39.020096 systemd[1]: kubelet.service: Consumed 619ms CPU time, 127.6M memory peak. Sep 13 00:25:39.021868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:39.632242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:39.642622 (kubelet)[2927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:25:39.715955 kubelet[2927]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:25:39.715955 kubelet[2927]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:25:39.715955 kubelet[2927]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:25:39.716236 kubelet[2927]: I0913 00:25:39.715981 2927 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:25:39.722364 kubelet[2927]: I0913 00:25:39.721891 2927 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:25:39.722364 kubelet[2927]: I0913 00:25:39.721905 2927 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:25:39.722364 kubelet[2927]: I0913 00:25:39.722329 2927 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:25:39.724600 kubelet[2927]: I0913 00:25:39.724583 2927 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:25:39.727895 kubelet[2927]: I0913 00:25:39.727868 2927 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:25:39.730620 kubelet[2927]: I0913 00:25:39.730606 2927 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 00:25:39.732825 kubelet[2927]: I0913 00:25:39.732811 2927 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:25:39.732969 kubelet[2927]: I0913 00:25:39.732946 2927 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:25:39.733070 kubelet[2927]: I0913 00:25:39.732969 2927 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:25:39.733138 kubelet[2927]: I0913 00:25:39.733073 2927 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 00:25:39.733138 kubelet[2927]: I0913 00:25:39.733080 2927 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:25:39.733138 kubelet[2927]: I0913 00:25:39.733111 2927 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:25:39.733260 kubelet[2927]: I0913 00:25:39.733247 2927 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:25:39.735253 kubelet[2927]: I0913 00:25:39.734694 2927 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:25:39.735253 kubelet[2927]: I0913 00:25:39.734717 2927 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:25:39.735253 kubelet[2927]: I0913 00:25:39.734725 2927 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:25:39.736397 kubelet[2927]: I0913 00:25:39.736388 2927 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 13 00:25:39.737389 kubelet[2927]: I0913 00:25:39.736757 2927 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:25:39.737389 kubelet[2927]: I0913 00:25:39.737029 2927 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:25:39.737389 kubelet[2927]: I0913 00:25:39.737050 2927 server.go:1287] "Started kubelet" Sep 13 00:25:39.742297 kubelet[2927]: I0913 00:25:39.742152 2927 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:25:39.750938 kubelet[2927]: I0913 00:25:39.748539 2927 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:25:39.750938 kubelet[2927]: I0913 00:25:39.749174 2927 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:25:39.750938 kubelet[2927]: I0913 00:25:39.749814 2927 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:25:39.750938 kubelet[2927]: I0913 00:25:39.749957 2927 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:25:39.750938 kubelet[2927]: I0913 00:25:39.750274 2927 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:25:39.752862 kubelet[2927]: I0913 00:25:39.752027 2927 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:25:39.752862 kubelet[2927]: E0913 00:25:39.752150 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:25:39.755655 kubelet[2927]: I0913 00:25:39.755635 2927 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:25:39.755713 kubelet[2927]: I0913 00:25:39.755705 2927 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:25:39.760256 kubelet[2927]: I0913 00:25:39.760160 2927 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:25:39.761843 kubelet[2927]: I0913 00:25:39.761826 2927 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:25:39.761906 kubelet[2927]: I0913 00:25:39.761846 2927 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:25:39.761906 kubelet[2927]: I0913 00:25:39.761857 2927 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:25:39.761906 kubelet[2927]: I0913 00:25:39.761861 2927 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:25:39.761906 kubelet[2927]: E0913 00:25:39.761891 2927 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:25:39.768376 kubelet[2927]: I0913 00:25:39.768178 2927 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:25:39.771176 kubelet[2927]: I0913 00:25:39.771157 2927 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:25:39.771258 kubelet[2927]: I0913 00:25:39.771252 2927 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:25:39.774435 kubelet[2927]: E0913 00:25:39.773491 2927 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:25:39.816596 kubelet[2927]: I0913 00:25:39.816565 2927 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:25:39.816596 kubelet[2927]: I0913 00:25:39.816578 2927 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:25:39.816709 kubelet[2927]: I0913 00:25:39.816601 2927 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:25:39.816760 kubelet[2927]: I0913 00:25:39.816729 2927 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:25:39.816760 kubelet[2927]: I0913 00:25:39.816738 2927 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:25:39.816760 kubelet[2927]: I0913 00:25:39.816751 2927 policy_none.go:49] "None policy: Start" Sep 13 00:25:39.816760 kubelet[2927]: I0913 00:25:39.816757 2927 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:25:39.816839 kubelet[2927]: I0913 00:25:39.816763 2927 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:25:39.816839 kubelet[2927]: I0913 00:25:39.816833 2927 state_mem.go:75] "Updated machine memory state" Sep 13 00:25:39.821842 kubelet[2927]: I0913 00:25:39.821344 2927 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:25:39.823057 kubelet[2927]: I0913 00:25:39.823042 2927 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:25:39.823115 kubelet[2927]: I0913 00:25:39.823054 2927 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:25:39.824120 kubelet[2927]: I0913 00:25:39.824108 2927 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:25:39.826409 kubelet[2927]: E0913 00:25:39.825107 2927 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:25:39.869447 kubelet[2927]: I0913 00:25:39.869426 2927 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:39.869711 kubelet[2927]: I0913 00:25:39.869698 2927 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:39.869842 kubelet[2927]: I0913 00:25:39.869829 2927 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:39.879026 kubelet[2927]: E0913 00:25:39.878652 2927 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:39.925160 kubelet[2927]: I0913 00:25:39.924906 2927 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:25:39.937382 kubelet[2927]: I0913 00:25:39.937360 2927 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:25:39.937472 kubelet[2927]: I0913 00:25:39.937414 2927 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:25:40.057022 kubelet[2927]: I0913 00:25:40.056874 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:25:40.057022 kubelet[2927]: I0913 00:25:40.056904 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:40.057022 kubelet[2927]: I0913 
00:25:40.056922 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:40.057022 kubelet[2927]: I0913 00:25:40.056940 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:40.057022 kubelet[2927]: I0913 00:25:40.056952 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:40.057180 kubelet[2927]: I0913 00:25:40.056963 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eafe04933aed4fc861359a595cd87f2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eafe04933aed4fc861359a595cd87f2a\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:25:40.057180 kubelet[2927]: I0913 00:25:40.056973 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:40.057180 kubelet[2927]: I0913 
00:25:40.056982 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:40.057180 kubelet[2927]: I0913 00:25:40.056996 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:25:40.738668 kubelet[2927]: I0913 00:25:40.738634 2927 apiserver.go:52] "Watching apiserver" Sep 13 00:25:40.756570 kubelet[2927]: I0913 00:25:40.756542 2927 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:25:40.821024 kubelet[2927]: I0913 00:25:40.820982 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.820970301 podStartE2EDuration="2.820970301s" podCreationTimestamp="2025-09-13 00:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:40.810479443 +0000 UTC m=+1.161941309" watchObservedRunningTime="2025-09-13 00:25:40.820970301 +0000 UTC m=+1.172432162" Sep 13 00:25:40.832448 kubelet[2927]: I0913 00:25:40.832410 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.832398816 podStartE2EDuration="1.832398816s" podCreationTimestamp="2025-09-13 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:40.821373631 +0000 UTC m=+1.172835499" watchObservedRunningTime="2025-09-13 00:25:40.832398816 +0000 UTC m=+1.183860677" Sep 13 00:25:40.842728 kubelet[2927]: I0913 00:25:40.841964 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.841932575 podStartE2EDuration="1.841932575s" podCreationTimestamp="2025-09-13 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:40.832711999 +0000 UTC m=+1.184173856" watchObservedRunningTime="2025-09-13 00:25:40.841932575 +0000 UTC m=+1.193394434" Sep 13 00:25:44.043319 kubelet[2927]: I0913 00:25:44.043299 2927 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:25:44.043843 containerd[1628]: time="2025-09-13T00:25:44.043819379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:25:44.044112 kubelet[2927]: I0913 00:25:44.043961 2927 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:25:44.572153 kubelet[2927]: I0913 00:25:44.570869 2927 status_manager.go:890] "Failed to get status for pod" podUID="cebf1c54-18ce-4a0a-b437-e73528124152" pod="kube-system/kube-proxy-8wzgs" err="pods \"kube-proxy-8wzgs\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 13 00:25:44.574438 systemd[1]: Created slice kubepods-besteffort-podcebf1c54_18ce_4a0a_b437_e73528124152.slice - libcontainer container kubepods-besteffort-podcebf1c54_18ce_4a0a_b437_e73528124152.slice. 
Sep 13 00:25:44.585961 kubelet[2927]: I0913 00:25:44.585942 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cebf1c54-18ce-4a0a-b437-e73528124152-kube-proxy\") pod \"kube-proxy-8wzgs\" (UID: \"cebf1c54-18ce-4a0a-b437-e73528124152\") " pod="kube-system/kube-proxy-8wzgs" Sep 13 00:25:44.586083 kubelet[2927]: I0913 00:25:44.586034 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cebf1c54-18ce-4a0a-b437-e73528124152-xtables-lock\") pod \"kube-proxy-8wzgs\" (UID: \"cebf1c54-18ce-4a0a-b437-e73528124152\") " pod="kube-system/kube-proxy-8wzgs" Sep 13 00:25:44.586083 kubelet[2927]: I0913 00:25:44.586049 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cebf1c54-18ce-4a0a-b437-e73528124152-lib-modules\") pod \"kube-proxy-8wzgs\" (UID: \"cebf1c54-18ce-4a0a-b437-e73528124152\") " pod="kube-system/kube-proxy-8wzgs" Sep 13 00:25:44.586083 kubelet[2927]: I0913 00:25:44.586060 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssjd\" (UniqueName: \"kubernetes.io/projected/cebf1c54-18ce-4a0a-b437-e73528124152-kube-api-access-lssjd\") pod \"kube-proxy-8wzgs\" (UID: \"cebf1c54-18ce-4a0a-b437-e73528124152\") " pod="kube-system/kube-proxy-8wzgs" Sep 13 00:25:44.884804 containerd[1628]: time="2025-09-13T00:25:44.884056576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wzgs,Uid:cebf1c54-18ce-4a0a-b437-e73528124152,Namespace:kube-system,Attempt:0,}" Sep 13 00:25:44.893796 containerd[1628]: time="2025-09-13T00:25:44.893723818Z" level=info msg="connecting to shim e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c" 
address="unix:///run/containerd/s/128ac054a17623ae4d77511fca91ecb2ab9e60ee889d21291a30d5084f28dd71" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:44.914492 systemd[1]: Started cri-containerd-e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c.scope - libcontainer container e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c. Sep 13 00:25:44.932493 containerd[1628]: time="2025-09-13T00:25:44.932468177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wzgs,Uid:cebf1c54-18ce-4a0a-b437-e73528124152,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c\"" Sep 13 00:25:44.934580 containerd[1628]: time="2025-09-13T00:25:44.934552384Z" level=info msg="CreateContainer within sandbox \"e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:25:44.942370 containerd[1628]: time="2025-09-13T00:25:44.942089892Z" level=info msg="Container d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:44.944713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229716538.mount: Deactivated successfully. 
Sep 13 00:25:44.946762 containerd[1628]: time="2025-09-13T00:25:44.946742705Z" level=info msg="CreateContainer within sandbox \"e1fbe368397b4bb0e2a1f321951899265f71d6a61635dbd8e970284c941f635c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74\"" Sep 13 00:25:44.947576 containerd[1628]: time="2025-09-13T00:25:44.947555085Z" level=info msg="StartContainer for \"d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74\"" Sep 13 00:25:44.949030 containerd[1628]: time="2025-09-13T00:25:44.949017466Z" level=info msg="connecting to shim d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74" address="unix:///run/containerd/s/128ac054a17623ae4d77511fca91ecb2ab9e60ee889d21291a30d5084f28dd71" protocol=ttrpc version=3 Sep 13 00:25:44.960522 systemd[1]: Started cri-containerd-d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74.scope - libcontainer container d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74. Sep 13 00:25:44.984897 containerd[1628]: time="2025-09-13T00:25:44.984875624Z" level=info msg="StartContainer for \"d3347c1e272157359e68028c7a14fb3f69cb57e24bd12d3e001f1e33d9e22c74\" returns successfully" Sep 13 00:25:45.042368 systemd[1]: Created slice kubepods-besteffort-podfb4fbb79_8627_4b87_906a_cb893eb90c8f.slice - libcontainer container kubepods-besteffort-podfb4fbb79_8627_4b87_906a_cb893eb90c8f.slice. 
Sep 13 00:25:45.089765 kubelet[2927]: I0913 00:25:45.089734 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhndb\" (UniqueName: \"kubernetes.io/projected/fb4fbb79-8627-4b87-906a-cb893eb90c8f-kube-api-access-rhndb\") pod \"tigera-operator-755d956888-bjd9c\" (UID: \"fb4fbb79-8627-4b87-906a-cb893eb90c8f\") " pod="tigera-operator/tigera-operator-755d956888-bjd9c" Sep 13 00:25:45.089765 kubelet[2927]: I0913 00:25:45.089765 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb4fbb79-8627-4b87-906a-cb893eb90c8f-var-lib-calico\") pod \"tigera-operator-755d956888-bjd9c\" (UID: \"fb4fbb79-8627-4b87-906a-cb893eb90c8f\") " pod="tigera-operator/tigera-operator-755d956888-bjd9c" Sep 13 00:25:45.348911 containerd[1628]: time="2025-09-13T00:25:45.348841890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bjd9c,Uid:fb4fbb79-8627-4b87-906a-cb893eb90c8f,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:25:45.361244 containerd[1628]: time="2025-09-13T00:25:45.361213123Z" level=info msg="connecting to shim 3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34" address="unix:///run/containerd/s/606e079a49e79ddb8d7f16d94e0182270f227b7c11cd1956c0558c01e57c6736" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:45.377457 systemd[1]: Started cri-containerd-3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34.scope - libcontainer container 3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34. 
Sep 13 00:25:45.408724 containerd[1628]: time="2025-09-13T00:25:45.408700586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bjd9c,Uid:fb4fbb79-8627-4b87-906a-cb893eb90c8f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34\"" Sep 13 00:25:45.410184 containerd[1628]: time="2025-09-13T00:25:45.409816917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:25:45.804185 kubelet[2927]: I0913 00:25:45.804119 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8wzgs" podStartSLOduration=1.804106488 podStartE2EDuration="1.804106488s" podCreationTimestamp="2025-09-13 00:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:45.803977225 +0000 UTC m=+6.155439083" watchObservedRunningTime="2025-09-13 00:25:45.804106488 +0000 UTC m=+6.155568345" Sep 13 00:25:47.204674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989484882.mount: Deactivated successfully. 
Sep 13 00:25:47.761274 containerd[1628]: time="2025-09-13T00:25:47.761246457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:47.761944 containerd[1628]: time="2025-09-13T00:25:47.761929469Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:25:47.762165 containerd[1628]: time="2025-09-13T00:25:47.762150514Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:47.763809 containerd[1628]: time="2025-09-13T00:25:47.763794316Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:47.764985 containerd[1628]: time="2025-09-13T00:25:47.764969298Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.355135846s" Sep 13 00:25:47.765044 containerd[1628]: time="2025-09-13T00:25:47.764985856Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:25:47.766465 containerd[1628]: time="2025-09-13T00:25:47.766439629Z" level=info msg="CreateContainer within sandbox \"3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:25:47.774373 containerd[1628]: time="2025-09-13T00:25:47.773030621Z" level=info msg="Container 
05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:47.774115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351710698.mount: Deactivated successfully. Sep 13 00:25:47.780521 containerd[1628]: time="2025-09-13T00:25:47.780502736Z" level=info msg="CreateContainer within sandbox \"3da56f3fbbdc63df5fd3db8fc3ddb3b01fdf6e6604153cb39f69954684f2fa34\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f\"" Sep 13 00:25:47.781126 containerd[1628]: time="2025-09-13T00:25:47.781039924Z" level=info msg="StartContainer for \"05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f\"" Sep 13 00:25:47.781667 containerd[1628]: time="2025-09-13T00:25:47.781641346Z" level=info msg="connecting to shim 05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f" address="unix:///run/containerd/s/606e079a49e79ddb8d7f16d94e0182270f227b7c11cd1956c0558c01e57c6736" protocol=ttrpc version=3 Sep 13 00:25:47.797465 systemd[1]: Started cri-containerd-05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f.scope - libcontainer container 05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f. 
Sep 13 00:25:47.823854 containerd[1628]: time="2025-09-13T00:25:47.823835149Z" level=info msg="StartContainer for \"05aa1513ffe9896c47343fc4cc07e9a43dce533b3aa941b2716c26a11fce286f\" returns successfully" Sep 13 00:25:49.467741 kubelet[2927]: I0913 00:25:49.467493 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bjd9c" podStartSLOduration=2.111491278 podStartE2EDuration="4.467481527s" podCreationTimestamp="2025-09-13 00:25:45 +0000 UTC" firstStartedPulling="2025-09-13 00:25:45.409381231 +0000 UTC m=+5.760843088" lastFinishedPulling="2025-09-13 00:25:47.76537148 +0000 UTC m=+8.116833337" observedRunningTime="2025-09-13 00:25:48.812873837 +0000 UTC m=+9.164335697" watchObservedRunningTime="2025-09-13 00:25:49.467481527 +0000 UTC m=+9.818943388" Sep 13 00:25:53.030009 sudo[1956]: pam_unix(sudo:session): session closed for user root Sep 13 00:25:53.030884 sshd[1955]: Connection closed by 139.178.89.65 port 40554 Sep 13 00:25:53.034262 sshd-session[1953]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:53.038176 systemd[1]: sshd@7-139.178.70.110:22-139.178.89.65:40554.service: Deactivated successfully. Sep 13 00:25:53.042166 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:25:53.043589 systemd[1]: session-9.scope: Consumed 2.511s CPU time, 150.8M memory peak. Sep 13 00:25:53.046594 systemd-logind[1599]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:25:53.047763 systemd-logind[1599]: Removed session 9. Sep 13 00:25:55.735443 systemd[1]: Created slice kubepods-besteffort-pod56a000e6_49d9_4c0e_a05a_85686182c797.slice - libcontainer container kubepods-besteffort-pod56a000e6_49d9_4c0e_a05a_85686182c797.slice. 
Sep 13 00:25:55.760374 kubelet[2927]: I0913 00:25:55.760303 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fds\" (UniqueName: \"kubernetes.io/projected/56a000e6-49d9-4c0e-a05a-85686182c797-kube-api-access-x7fds\") pod \"calico-typha-7cf95bbdc8-grwm8\" (UID: \"56a000e6-49d9-4c0e-a05a-85686182c797\") " pod="calico-system/calico-typha-7cf95bbdc8-grwm8" Sep 13 00:25:55.760909 kubelet[2927]: I0913 00:25:55.760848 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56a000e6-49d9-4c0e-a05a-85686182c797-typha-certs\") pod \"calico-typha-7cf95bbdc8-grwm8\" (UID: \"56a000e6-49d9-4c0e-a05a-85686182c797\") " pod="calico-system/calico-typha-7cf95bbdc8-grwm8" Sep 13 00:25:55.760909 kubelet[2927]: I0913 00:25:55.760873 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56a000e6-49d9-4c0e-a05a-85686182c797-tigera-ca-bundle\") pod \"calico-typha-7cf95bbdc8-grwm8\" (UID: \"56a000e6-49d9-4c0e-a05a-85686182c797\") " pod="calico-system/calico-typha-7cf95bbdc8-grwm8" Sep 13 00:25:56.038514 containerd[1628]: time="2025-09-13T00:25:56.038437122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf95bbdc8-grwm8,Uid:56a000e6-49d9-4c0e-a05a-85686182c797,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:56.085414 systemd[1]: Created slice kubepods-besteffort-pod71fb5a5a_8e98_4da4_8837_3e3d7f3c04db.slice - libcontainer container kubepods-besteffort-pod71fb5a5a_8e98_4da4_8837_3e3d7f3c04db.slice. 
Sep 13 00:25:56.087490 containerd[1628]: time="2025-09-13T00:25:56.086057961Z" level=info msg="connecting to shim 87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a" address="unix:///run/containerd/s/42f09cd65435b6335bc2163d32d978dc4654244c5f4c67596f8d15ffa9c07abf" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:56.114500 systemd[1]: Started cri-containerd-87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a.scope - libcontainer container 87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a. Sep 13 00:25:56.163164 kubelet[2927]: I0913 00:25:56.163060 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-cni-bin-dir\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163258 kubelet[2927]: I0913 00:25:56.163182 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-tigera-ca-bundle\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163337 kubelet[2927]: I0913 00:25:56.163296 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-cni-net-dir\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163337 kubelet[2927]: I0913 00:25:56.163325 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-269wt\" (UniqueName: \"kubernetes.io/projected/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-kube-api-access-269wt\") pod 
\"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163455 kubelet[2927]: I0913 00:25:56.163344 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-lib-modules\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163455 kubelet[2927]: I0913 00:25:56.163386 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-node-certs\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163455 kubelet[2927]: I0913 00:25:56.163404 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-cni-log-dir\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163455 kubelet[2927]: I0913 00:25:56.163419 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-xtables-lock\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163638 kubelet[2927]: I0913 00:25:56.163470 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-flexvol-driver-host\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " 
pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163638 kubelet[2927]: I0913 00:25:56.163489 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-policysync\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163638 kubelet[2927]: I0913 00:25:56.163498 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-var-lib-calico\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.163638 kubelet[2927]: I0913 00:25:56.163510 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/71fb5a5a-8e98-4da4-8837-3e3d7f3c04db-var-run-calico\") pod \"calico-node-szwgf\" (UID: \"71fb5a5a-8e98-4da4-8837-3e3d7f3c04db\") " pod="calico-system/calico-node-szwgf" Sep 13 00:25:56.168883 containerd[1628]: time="2025-09-13T00:25:56.168849669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf95bbdc8-grwm8,Uid:56a000e6-49d9-4c0e-a05a-85686182c797,Namespace:calico-system,Attempt:0,} returns sandbox id \"87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a\"" Sep 13 00:25:56.170160 containerd[1628]: time="2025-09-13T00:25:56.170139349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:25:56.283253 kubelet[2927]: E0913 00:25:56.282786 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.283253 kubelet[2927]: W0913 00:25:56.282807 2927 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.283636 kubelet[2927]: E0913 00:25:56.283431 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.286639 kubelet[2927]: E0913 00:25:56.286619 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.286639 kubelet[2927]: W0913 00:25:56.286635 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.286990 kubelet[2927]: E0913 00:25:56.286658 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.295764 kubelet[2927]: E0913 00:25:56.295701 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.295919 kubelet[2927]: W0913 00:25:56.295870 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.295919 kubelet[2927]: E0913 00:25:56.295892 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.374962 kubelet[2927]: E0913 00:25:56.374911 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791" Sep 13 00:25:56.412009 containerd[1628]: time="2025-09-13T00:25:56.411800322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szwgf,Uid:71fb5a5a-8e98-4da4-8837-3e3d7f3c04db,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:56.453975 kubelet[2927]: E0913 00:25:56.453946 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.453975 kubelet[2927]: W0913 00:25:56.453970 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.454091 kubelet[2927]: E0913 00:25:56.454002 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.454160 kubelet[2927]: E0913 00:25:56.454142 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.454196 kubelet[2927]: W0913 00:25:56.454165 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.454196 kubelet[2927]: E0913 00:25:56.454174 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.467849 kubelet[2927]: E0913 00:25:56.467828 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.468475 kubelet[2927]: W0913 00:25:56.468367 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.468475 kubelet[2927]: E0913 00:25:56.468389 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.468475 kubelet[2927]: I0913 00:25:56.468428 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/250c5ded-929d-4e71-af09-039897e71791-registration-dir\") pod \"csi-node-driver-9rjbl\" (UID: \"250c5ded-929d-4e71-af09-039897e71791\") " pod="calico-system/csi-node-driver-9rjbl" Sep 13 00:25:56.468724 kubelet[2927]: E0913 00:25:56.468705 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.468793 kubelet[2927]: W0913 00:25:56.468778 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.469008 kubelet[2927]: E0913 00:25:56.468860 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.469008 kubelet[2927]: I0913 00:25:56.468880 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dh7\" (UniqueName: \"kubernetes.io/projected/250c5ded-929d-4e71-af09-039897e71791-kube-api-access-b6dh7\") pod \"csi-node-driver-9rjbl\" (UID: \"250c5ded-929d-4e71-af09-039897e71791\") " pod="calico-system/csi-node-driver-9rjbl" Sep 13 00:25:56.469427 kubelet[2927]: E0913 00:25:56.469412 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.469564 kubelet[2927]: W0913 00:25:56.469554 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.469722 kubelet[2927]: E0913 00:25:56.469712 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.469920 kubelet[2927]: I0913 00:25:56.469858 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250c5ded-929d-4e71-af09-039897e71791-kubelet-dir\") pod \"csi-node-driver-9rjbl\" (UID: \"250c5ded-929d-4e71-af09-039897e71791\") " pod="calico-system/csi-node-driver-9rjbl" Sep 13 00:25:56.470200 kubelet[2927]: E0913 00:25:56.470191 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.470341 kubelet[2927]: W0913 00:25:56.470327 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.470599 kubelet[2927]: E0913 00:25:56.470550 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.470599 kubelet[2927]: I0913 00:25:56.470573 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/250c5ded-929d-4e71-af09-039897e71791-socket-dir\") pod \"csi-node-driver-9rjbl\" (UID: \"250c5ded-929d-4e71-af09-039897e71791\") " pod="calico-system/csi-node-driver-9rjbl" Sep 13 00:25:56.470955 kubelet[2927]: E0913 00:25:56.470935 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.470955 kubelet[2927]: W0913 00:25:56.470944 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.471274 kubelet[2927]: E0913 00:25:56.471257 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.471483 kubelet[2927]: E0913 00:25:56.471414 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.471483 kubelet[2927]: W0913 00:25:56.471423 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.471810 kubelet[2927]: E0913 00:25:56.471730 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.473009 kubelet[2927]: I0913 00:25:56.472655 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/250c5ded-929d-4e71-af09-039897e71791-varrun\") pod \"csi-node-driver-9rjbl\" (UID: \"250c5ded-929d-4e71-af09-039897e71791\") " pod="calico-system/csi-node-driver-9rjbl" Sep 13 00:25:56.473297 kubelet[2927]: E0913 00:25:56.473101 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.473297 kubelet[2927]: W0913 00:25:56.473110 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.473297 kubelet[2927]: E0913 00:25:56.473133 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.473676 kubelet[2927]: E0913 00:25:56.473567 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.473676 kubelet[2927]: W0913 00:25:56.473582 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.473676 kubelet[2927]: E0913 00:25:56.473591 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.476477 kubelet[2927]: E0913 00:25:56.476425 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.476477 kubelet[2927]: W0913 00:25:56.476435 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.476477 kubelet[2927]: E0913 00:25:56.476443 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.540451 containerd[1628]: time="2025-09-13T00:25:56.540419311Z" level=info msg="connecting to shim 4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238" address="unix:///run/containerd/s/eab66ab92d6535f060224470e730d49cb9b00056775c410e61a33f9f64ce374e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:56.557596 systemd[1]: Started cri-containerd-4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238.scope - libcontainer container 4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238. Sep 13 00:25:56.577124 kubelet[2927]: E0913 00:25:56.577072 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.577849 kubelet[2927]: W0913 00:25:56.577269 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.577849 kubelet[2927]: E0913 00:25:56.577295 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.583249 kubelet[2927]: E0913 00:25:56.581883 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.583249 kubelet[2927]: W0913 00:25:56.581888 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.583249 kubelet[2927]: E0913 00:25:56.581940 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.583249 kubelet[2927]: E0913 00:25:56.582341 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.583249 kubelet[2927]: W0913 00:25:56.582354 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.583249 kubelet[2927]: E0913 00:25:56.582444 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.585224 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.586679 kubelet[2927]: W0913 00:25:56.585250 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.585337 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.586099 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.586679 kubelet[2927]: W0913 00:25:56.586112 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.586131 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.586262 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.586679 kubelet[2927]: W0913 00:25:56.586269 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.586283 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.586679 kubelet[2927]: E0913 00:25:56.586468 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.586987 containerd[1628]: time="2025-09-13T00:25:56.585854753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szwgf,Uid:71fb5a5a-8e98-4da4-8837-3e3d7f3c04db,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\"" Sep 13 00:25:56.587029 kubelet[2927]: W0913 00:25:56.586474 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.587029 kubelet[2927]: E0913 00:25:56.586494 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.587029 kubelet[2927]: E0913 00:25:56.586587 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.587029 kubelet[2927]: W0913 00:25:56.586592 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.587029 kubelet[2927]: E0913 00:25:56.586597 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:56.593457 kubelet[2927]: E0913 00:25:56.593416 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.593457 kubelet[2927]: W0913 00:25:56.593435 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.593457 kubelet[2927]: E0913 00:25:56.593458 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:56.594949 kubelet[2927]: E0913 00:25:56.594932 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:56.594949 kubelet[2927]: W0913 00:25:56.594945 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:56.595018 kubelet[2927]: E0913 00:25:56.594958 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:57.762982 kubelet[2927]: E0913 00:25:57.762052 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791" Sep 13 00:25:57.851568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684041551.mount: Deactivated successfully. 
Sep 13 00:25:58.562392 containerd[1628]: time="2025-09-13T00:25:58.561891685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:58.562392 containerd[1628]: time="2025-09-13T00:25:58.562293974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:25:58.563249 containerd[1628]: time="2025-09-13T00:25:58.563223803Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:58.564880 containerd[1628]: time="2025-09-13T00:25:58.564852671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:25:58.565828 containerd[1628]: time="2025-09-13T00:25:58.565803231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.39562496s"
Sep 13 00:25:58.565885 containerd[1628]: time="2025-09-13T00:25:58.565831406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:25:58.566644 containerd[1628]: time="2025-09-13T00:25:58.566607280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:25:58.578895 containerd[1628]: time="2025-09-13T00:25:58.577698785Z" level=info msg="CreateContainer within sandbox \"87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:25:58.584488 containerd[1628]: time="2025-09-13T00:25:58.582801751Z" level=info msg="Container 2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:25:58.594178 containerd[1628]: time="2025-09-13T00:25:58.594118964Z" level=info msg="CreateContainer within sandbox \"87bd19bcfc11305e6a461a02f6b4d03d5925584747aafadad336cf1be27dfe8a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8\""
Sep 13 00:25:58.595030 containerd[1628]: time="2025-09-13T00:25:58.594841717Z" level=info msg="StartContainer for \"2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8\""
Sep 13 00:25:58.595874 containerd[1628]: time="2025-09-13T00:25:58.595837106Z" level=info msg="connecting to shim 2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8" address="unix:///run/containerd/s/42f09cd65435b6335bc2163d32d978dc4654244c5f4c67596f8d15ffa9c07abf" protocol=ttrpc version=3
Sep 13 00:25:58.616542 systemd[1]: Started cri-containerd-2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8.scope - libcontainer container 2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8.
Sep 13 00:25:58.672682 containerd[1628]: time="2025-09-13T00:25:58.672417740Z" level=info msg="StartContainer for \"2a88934645f7e2db0b9ec6d1b72334cc2f25034fca12e5b78b588d6616442bb8\" returns successfully"
Sep 13 00:25:58.972175 kubelet[2927]: E0913 00:25:58.972081 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:25:58.972175 kubelet[2927]: W0913 00:25:58.972099 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:25:58.972175 kubelet[2927]: E0913 00:25:58.972113 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:25:59.000713 kubelet[2927]: E0913 00:25:59.000695 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:25:59.000713 kubelet[2927]: W0913 00:25:59.000700 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:25:59.000713 kubelet[2927]: E0913 00:25:59.000705 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.000859 kubelet[2927]: E0913 00:25:59.000849 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.000859 kubelet[2927]: W0913 00:25:59.000856 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.001206 kubelet[2927]: E0913 00:25:59.000861 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.762574 kubelet[2927]: E0913 00:25:59.762521 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791" Sep 13 00:25:59.890370 kubelet[2927]: I0913 00:25:59.890242 2927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:59.980873 kubelet[2927]: E0913 00:25:59.980845 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.980873 kubelet[2927]: W0913 00:25:59.980866 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.981149 kubelet[2927]: E0913 00:25:59.980884 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.981149 kubelet[2927]: E0913 00:25:59.981008 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.981149 kubelet[2927]: W0913 00:25:59.981016 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.981149 kubelet[2927]: E0913 00:25:59.981023 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981335 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982150 kubelet[2927]: W0913 00:25:59.981345 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981364 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981513 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982150 kubelet[2927]: W0913 00:25:59.981520 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981525 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981664 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982150 kubelet[2927]: W0913 00:25:59.981670 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981675 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.982150 kubelet[2927]: E0913 00:25:59.981793 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982382 kubelet[2927]: W0913 00:25:59.981800 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.981807 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.981916 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982382 kubelet[2927]: W0913 00:25:59.981923 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.981930 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.982053 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982382 kubelet[2927]: W0913 00:25:59.982060 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.982067 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982382 kubelet[2927]: E0913 00:25:59.982204 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982382 kubelet[2927]: W0913 00:25:59.982210 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982622 kubelet[2927]: E0913 00:25:59.982217 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.982622 kubelet[2927]: E0913 00:25:59.982311 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982622 kubelet[2927]: W0913 00:25:59.982316 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982622 kubelet[2927]: E0913 00:25:59.982321 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982622 kubelet[2927]: E0913 00:25:59.982425 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982622 kubelet[2927]: W0913 00:25:59.982431 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982622 kubelet[2927]: E0913 00:25:59.982438 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.982862 kubelet[2927]: E0913 00:25:59.982840 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982862 kubelet[2927]: W0913 00:25:59.982848 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982862 kubelet[2927]: E0913 00:25:59.982854 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.982952 kubelet[2927]: E0913 00:25:59.982942 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.982952 kubelet[2927]: W0913 00:25:59.982949 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.982993 kubelet[2927]: E0913 00:25:59.982954 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:25:59.983059 kubelet[2927]: E0913 00:25:59.983048 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.983059 kubelet[2927]: W0913 00:25:59.983057 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.983116 kubelet[2927]: E0913 00:25:59.983068 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:25:59.983196 kubelet[2927]: E0913 00:25:59.983185 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:25:59.983196 kubelet[2927]: W0913 00:25:59.983193 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:25:59.983242 kubelet[2927]: E0913 00:25:59.983200 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.005627 kubelet[2927]: E0913 00:26:00.005605 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.005627 kubelet[2927]: W0913 00:26:00.005622 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.005781 kubelet[2927]: E0913 00:26:00.005638 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.005781 kubelet[2927]: E0913 00:26:00.005771 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.005781 kubelet[2927]: W0913 00:26:00.005777 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.005876 kubelet[2927]: E0913 00:26:00.005789 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.005917 kubelet[2927]: E0913 00:26:00.005907 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.005917 kubelet[2927]: W0913 00:26:00.005916 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.005963 kubelet[2927]: E0913 00:26:00.005924 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.006092 kubelet[2927]: E0913 00:26:00.006082 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.006119 kubelet[2927]: W0913 00:26:00.006092 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.006119 kubelet[2927]: E0913 00:26:00.006102 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.006258 kubelet[2927]: E0913 00:26:00.006248 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.006258 kubelet[2927]: W0913 00:26:00.006257 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.006338 kubelet[2927]: E0913 00:26:00.006268 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.006391 kubelet[2927]: E0913 00:26:00.006365 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.006391 kubelet[2927]: W0913 00:26:00.006370 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.006391 kubelet[2927]: E0913 00:26:00.006378 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006477 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.009236 kubelet[2927]: W0913 00:26:00.006483 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006490 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006662 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.009236 kubelet[2927]: W0913 00:26:00.006671 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006688 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006791 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.009236 kubelet[2927]: W0913 00:26:00.006798 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006812 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.009236 kubelet[2927]: E0913 00:26:00.006908 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.010200 kubelet[2927]: W0913 00:26:00.006914 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.006923 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.007017 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.010200 kubelet[2927]: W0913 00:26:00.007022 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.007030 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.007391 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.010200 kubelet[2927]: W0913 00:26:00.007397 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.007409 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.010200 kubelet[2927]: E0913 00:26:00.007877 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.010200 kubelet[2927]: W0913 00:26:00.007883 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.007894 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.008046 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.011102 kubelet[2927]: W0913 00:26:00.008051 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.008178 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.008810 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.011102 kubelet[2927]: W0913 00:26:00.008816 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.008822 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.009712 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.011102 kubelet[2927]: W0913 00:26:00.009718 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011102 kubelet[2927]: E0913 00:26:00.009730 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.011565 kubelet[2927]: E0913 00:26:00.010402 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.011565 kubelet[2927]: W0913 00:26:00.010408 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011565 kubelet[2927]: E0913 00:26:00.010421 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:00.011565 kubelet[2927]: E0913 00:26:00.010740 2927 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:00.011565 kubelet[2927]: W0913 00:26:00.010746 2927 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:00.011565 kubelet[2927]: E0913 00:26:00.010752 2927 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:00.051073 kubelet[2927]: I0913 00:26:00.050720 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf95bbdc8-grwm8" podStartSLOduration=2.653982191 podStartE2EDuration="5.050705475s" podCreationTimestamp="2025-09-13 00:25:55 +0000 UTC" firstStartedPulling="2025-09-13 00:25:56.169813287 +0000 UTC m=+16.521275144" lastFinishedPulling="2025-09-13 00:25:58.566536569 +0000 UTC m=+18.917998428" observedRunningTime="2025-09-13 00:25:58.913796183 +0000 UTC m=+19.265258045" watchObservedRunningTime="2025-09-13 00:26:00.050705475 +0000 UTC m=+20.402167337" Sep 13 00:26:00.385639 containerd[1628]: time="2025-09-13T00:26:00.385611111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:00.394149 containerd[1628]: time="2025-09-13T00:26:00.394120070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:26:00.404461 containerd[1628]: time="2025-09-13T00:26:00.404427005Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:00.415079 containerd[1628]: time="2025-09-13T00:26:00.415049294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:00.415953 containerd[1628]: time="2025-09-13T00:26:00.415928585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.849075606s"
Sep 13 00:26:00.416228 containerd[1628]: time="2025-09-13T00:26:00.415954847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:26:00.418371 containerd[1628]: time="2025-09-13T00:26:00.418320870Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:26:00.467327 containerd[1628]: time="2025-09-13T00:26:00.467183837Z" level=info msg="Container 617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:26:00.505081 containerd[1628]: time="2025-09-13T00:26:00.505006282Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\""
Sep 13 00:26:00.505631 containerd[1628]: time="2025-09-13T00:26:00.505610028Z" level=info msg="StartContainer for \"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\""
Sep 13 00:26:00.507537 containerd[1628]: time="2025-09-13T00:26:00.507345486Z" level=info msg="connecting to shim 617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec" address="unix:///run/containerd/s/eab66ab92d6535f060224470e730d49cb9b00056775c410e61a33f9f64ce374e" protocol=ttrpc version=3
Sep 13 00:26:00.525463 systemd[1]: Started cri-containerd-617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec.scope - libcontainer container 617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec.
Sep 13 00:26:00.557378 containerd[1628]: time="2025-09-13T00:26:00.557171390Z" level=info msg="StartContainer for \"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\" returns successfully"
Sep 13 00:26:00.566939 systemd[1]: cri-containerd-617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec.scope: Deactivated successfully.
Sep 13 00:26:00.605427 containerd[1628]: time="2025-09-13T00:26:00.605383410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\" id:\"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\" pid:3633 exited_at:{seconds:1757723160 nanos:568468113}"
Sep 13 00:26:00.605912 containerd[1628]: time="2025-09-13T00:26:00.605888113Z" level=info msg="received exit event container_id:\"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\" id:\"617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec\" pid:3633 exited_at:{seconds:1757723160 nanos:568468113}"
Sep 13 00:26:00.623476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617bf429ddb8731fe7b9e1b452da23e3a4d667d68d49b01dd55e030c691042ec-rootfs.mount: Deactivated successfully.
Sep 13 00:26:01.762577 kubelet[2927]: E0913 00:26:01.762302 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791"
Sep 13 00:26:01.896777 containerd[1628]: time="2025-09-13T00:26:01.896638873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:26:03.457310 systemd[1]: Started sshd@8-139.178.70.110:22-34.123.134.194:52218.service - OpenSSH per-connection server daemon (34.123.134.194:52218).
Sep 13 00:26:03.762825 kubelet[2927]: E0913 00:26:03.762555 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791"
Sep 13 00:26:04.146099 sshd[3672]: Invalid user mongo from 34.123.134.194 port 52218
Sep 13 00:26:04.206516 sshd[3672]: Received disconnect from 34.123.134.194 port 52218:11: Bye Bye [preauth]
Sep 13 00:26:04.206604 sshd[3672]: Disconnected from invalid user mongo 34.123.134.194 port 52218 [preauth]
Sep 13 00:26:04.208503 systemd[1]: sshd@8-139.178.70.110:22-34.123.134.194:52218.service: Deactivated successfully.
Sep 13 00:26:05.536480 containerd[1628]: time="2025-09-13T00:26:05.536445770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:05.537134 containerd[1628]: time="2025-09-13T00:26:05.537117231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:26:05.537437 containerd[1628]: time="2025-09-13T00:26:05.537419739Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:05.538403 containerd[1628]: time="2025-09-13T00:26:05.538384167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:05.539112 containerd[1628]: time="2025-09-13T00:26:05.539096041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.642417844s"
Sep 13 00:26:05.539137 containerd[1628]: time="2025-09-13T00:26:05.539114466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:26:05.541033 containerd[1628]: time="2025-09-13T00:26:05.541014905Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:26:05.549361 containerd[1628]: time="2025-09-13T00:26:05.549052867Z" level=info msg="Container 732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:26:05.554694 containerd[1628]: time="2025-09-13T00:26:05.554621083Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\""
Sep 13 00:26:05.555932 containerd[1628]: time="2025-09-13T00:26:05.555746272Z" level=info msg="StartContainer for \"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\""
Sep 13 00:26:05.557166 containerd[1628]: time="2025-09-13T00:26:05.557147555Z" level=info msg="connecting to shim 732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c" address="unix:///run/containerd/s/eab66ab92d6535f060224470e730d49cb9b00056775c410e61a33f9f64ce374e" protocol=ttrpc version=3
Sep 13 00:26:05.576471 systemd[1]: Started cri-containerd-732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c.scope - libcontainer container 732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c.
Sep 13 00:26:05.615206 containerd[1628]: time="2025-09-13T00:26:05.615181607Z" level=info msg="StartContainer for \"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\" returns successfully"
Sep 13 00:26:05.762585 kubelet[2927]: E0913 00:26:05.762345 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791"
Sep 13 00:26:06.837870 systemd[1]: cri-containerd-732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c.scope: Deactivated successfully.
Sep 13 00:26:06.838529 systemd[1]: cri-containerd-732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c.scope: Consumed 282ms CPU time, 156.9M memory peak, 40K read from disk, 171.3M written to disk.
Sep 13 00:26:06.896171 containerd[1628]: time="2025-09-13T00:26:06.894460673Z" level=info msg="received exit event container_id:\"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\" id:\"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\" pid:3698 exited_at:{seconds:1757723166 nanos:894309411}"
Sep 13 00:26:06.896171 containerd[1628]: time="2025-09-13T00:26:06.894611687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\" id:\"732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c\" pid:3698 exited_at:{seconds:1757723166 nanos:894309411}"
Sep 13 00:26:06.902193 kubelet[2927]: I0913 00:26:06.902176 2927 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 00:26:06.916675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-732e103d59074387b7b0f017715fde80f45f89f2fe593ab44a4b1d33b4dbdb1c-rootfs.mount: Deactivated successfully.
Sep 13 00:26:07.048636 systemd[1]: Created slice kubepods-burstable-podfb56d53b_d4f3_4fb0_8ed0_d970a346c029.slice - libcontainer container kubepods-burstable-podfb56d53b_d4f3_4fb0_8ed0_d970a346c029.slice.
Sep 13 00:26:07.055542 kubelet[2927]: W0913 00:26:07.055520 2927 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Sep 13 00:26:07.057025 kubelet[2927]: E0913 00:26:07.057002 2927 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 13 00:26:07.065642 systemd[1]: Created slice kubepods-burstable-podf3f75fb6_0073_44fe_9812_dc1c8eba6ba9.slice - libcontainer container kubepods-burstable-podf3f75fb6_0073_44fe_9812_dc1c8eba6ba9.slice.
Sep 13 00:26:07.072136 systemd[1]: Created slice kubepods-besteffort-podfef92abd_c727_44ef_9eb1_bae2f69ae74d.slice - libcontainer container kubepods-besteffort-podfef92abd_c727_44ef_9eb1_bae2f69ae74d.slice.
Sep 13 00:26:07.078690 systemd[1]: Created slice kubepods-besteffort-pod07e6f729_e3ab_45e0_ab76_d34bfb5a97db.slice - libcontainer container kubepods-besteffort-pod07e6f729_e3ab_45e0_ab76_d34bfb5a97db.slice.
Sep 13 00:26:07.084713 systemd[1]: Created slice kubepods-besteffort-pod340a543d_09d0_4895_9435_78201b2b74e5.slice - libcontainer container kubepods-besteffort-pod340a543d_09d0_4895_9435_78201b2b74e5.slice.
Sep 13 00:26:07.090200 systemd[1]: Created slice kubepods-besteffort-pod9a891804_3bb4_4165_9b0f_6e24d55c0380.slice - libcontainer container kubepods-besteffort-pod9a891804_3bb4_4165_9b0f_6e24d55c0380.slice.
Sep 13 00:26:07.095105 systemd[1]: Created slice kubepods-besteffort-podfe582828_806d_4540_8f45_5dc578de7bff.slice - libcontainer container kubepods-besteffort-podfe582828_806d_4540_8f45_5dc578de7bff.slice.
Sep 13 00:26:07.163842 kubelet[2927]: I0913 00:26:07.163591 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3f75fb6-0073-44fe-9812-dc1c8eba6ba9-config-volume\") pod \"coredns-668d6bf9bc-rjdzj\" (UID: \"f3f75fb6-0073-44fe-9812-dc1c8eba6ba9\") " pod="kube-system/coredns-668d6bf9bc-rjdzj"
Sep 13 00:26:07.163842 kubelet[2927]: I0913 00:26:07.163619 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/340a543d-09d0-4895-9435-78201b2b74e5-goldmane-key-pair\") pod \"goldmane-54d579b49d-td6hn\" (UID: \"340a543d-09d0-4895-9435-78201b2b74e5\") " pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.163842 kubelet[2927]: I0913 00:26:07.163630 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh48x\" (UniqueName: \"kubernetes.io/projected/340a543d-09d0-4895-9435-78201b2b74e5-kube-api-access-mh48x\") pod \"goldmane-54d579b49d-td6hn\" (UID: \"340a543d-09d0-4895-9435-78201b2b74e5\") " pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.163842 kubelet[2927]: I0913 00:26:07.163641 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8vx\" (UniqueName: \"kubernetes.io/projected/fb56d53b-d4f3-4fb0-8ed0-d970a346c029-kube-api-access-dp8vx\") pod \"coredns-668d6bf9bc-tgvtb\" (UID: \"fb56d53b-d4f3-4fb0-8ed0-d970a346c029\") " pod="kube-system/coredns-668d6bf9bc-tgvtb"
Sep 13 00:26:07.163842 kubelet[2927]: I0913 00:26:07.163651 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe582828-806d-4540-8f45-5dc578de7bff-calico-apiserver-certs\") pod \"calico-apiserver-6b9768b6f-mvxpt\" (UID: \"fe582828-806d-4540-8f45-5dc578de7bff\") " pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt"
Sep 13 00:26:07.164028 kubelet[2927]: I0913 00:26:07.163681 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khr2c\" (UniqueName: \"kubernetes.io/projected/fef92abd-c727-44ef-9eb1-bae2f69ae74d-kube-api-access-khr2c\") pod \"calico-apiserver-6b9768b6f-brfmv\" (UID: \"fef92abd-c727-44ef-9eb1-bae2f69ae74d\") " pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv"
Sep 13 00:26:07.164028 kubelet[2927]: I0913 00:26:07.163694 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb56d53b-d4f3-4fb0-8ed0-d970a346c029-config-volume\") pod \"coredns-668d6bf9bc-tgvtb\" (UID: \"fb56d53b-d4f3-4fb0-8ed0-d970a346c029\") " pod="kube-system/coredns-668d6bf9bc-tgvtb"
Sep 13 00:26:07.164028 kubelet[2927]: I0913 00:26:07.163703 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc9zv\" (UniqueName: \"kubernetes.io/projected/fe582828-806d-4540-8f45-5dc578de7bff-kube-api-access-qc9zv\") pod \"calico-apiserver-6b9768b6f-mvxpt\" (UID: \"fe582828-806d-4540-8f45-5dc578de7bff\") " pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt"
Sep 13 00:26:07.164028 kubelet[2927]: I0913 00:26:07.163716 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340a543d-09d0-4895-9435-78201b2b74e5-config\") pod \"goldmane-54d579b49d-td6hn\" (UID: \"340a543d-09d0-4895-9435-78201b2b74e5\") " pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.164028 kubelet[2927]: I0913 00:26:07.163726 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/340a543d-09d0-4895-9435-78201b2b74e5-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-td6hn\" (UID: \"340a543d-09d0-4895-9435-78201b2b74e5\") " pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.164454 kubelet[2927]: I0913 00:26:07.163735 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-backend-key-pair\") pod \"whisker-5485fd9959-mljf7\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " pod="calico-system/whisker-5485fd9959-mljf7"
Sep 13 00:26:07.164454 kubelet[2927]: I0913 00:26:07.163747 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57p5\" (UniqueName: \"kubernetes.io/projected/9a891804-3bb4-4165-9b0f-6e24d55c0380-kube-api-access-h57p5\") pod \"whisker-5485fd9959-mljf7\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " pod="calico-system/whisker-5485fd9959-mljf7"
Sep 13 00:26:07.164454 kubelet[2927]: I0913 00:26:07.163771 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07e6f729-e3ab-45e0-ab76-d34bfb5a97db-tigera-ca-bundle\") pod \"calico-kube-controllers-8487fb95d8-4ffbb\" (UID: \"07e6f729-e3ab-45e0-ab76-d34bfb5a97db\") " pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb"
Sep 13 00:26:07.164454 kubelet[2927]: I0913 00:26:07.163793 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fef92abd-c727-44ef-9eb1-bae2f69ae74d-calico-apiserver-certs\") pod \"calico-apiserver-6b9768b6f-brfmv\" (UID: \"fef92abd-c727-44ef-9eb1-bae2f69ae74d\") " pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv"
Sep 13 00:26:07.164454 kubelet[2927]: I0913 00:26:07.163805 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-ca-bundle\") pod \"whisker-5485fd9959-mljf7\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " pod="calico-system/whisker-5485fd9959-mljf7"
Sep 13 00:26:07.164542 kubelet[2927]: I0913 00:26:07.163819 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9kp\" (UniqueName: \"kubernetes.io/projected/f3f75fb6-0073-44fe-9812-dc1c8eba6ba9-kube-api-access-fg9kp\") pod \"coredns-668d6bf9bc-rjdzj\" (UID: \"f3f75fb6-0073-44fe-9812-dc1c8eba6ba9\") " pod="kube-system/coredns-668d6bf9bc-rjdzj"
Sep 13 00:26:07.164542 kubelet[2927]: I0913 00:26:07.163885 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvgq\" (UniqueName: \"kubernetes.io/projected/07e6f729-e3ab-45e0-ab76-d34bfb5a97db-kube-api-access-tzvgq\") pod \"calico-kube-controllers-8487fb95d8-4ffbb\" (UID: \"07e6f729-e3ab-45e0-ab76-d34bfb5a97db\") " pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb"
Sep 13 00:26:07.370709 containerd[1628]: time="2025-09-13T00:26:07.370063972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjdzj,Uid:f3f75fb6-0073-44fe-9812-dc1c8eba6ba9,Namespace:kube-system,Attempt:0,}"
Sep 13 00:26:07.373140 containerd[1628]: time="2025-09-13T00:26:07.373117847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgvtb,Uid:fb56d53b-d4f3-4fb0-8ed0-d970a346c029,Namespace:kube-system,Attempt:0,}"
Sep 13 00:26:07.384931 containerd[1628]: time="2025-09-13T00:26:07.384904712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8487fb95d8-4ffbb,Uid:07e6f729-e3ab-45e0-ab76-d34bfb5a97db,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:07.388065 containerd[1628]: time="2025-09-13T00:26:07.388005504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-td6hn,Uid:340a543d-09d0-4895-9435-78201b2b74e5,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:07.394384 containerd[1628]: time="2025-09-13T00:26:07.394144221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5485fd9959-mljf7,Uid:9a891804-3bb4-4165-9b0f-6e24d55c0380,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:07.645113 containerd[1628]: time="2025-09-13T00:26:07.645002733Z" level=error msg="Failed to destroy network for sandbox \"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.652926 containerd[1628]: time="2025-09-13T00:26:07.645598354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjdzj,Uid:f3f75fb6-0073-44fe-9812-dc1c8eba6ba9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.653096 containerd[1628]: time="2025-09-13T00:26:07.646310122Z" level=error msg="Failed to destroy network for sandbox \"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.653213 containerd[1628]: time="2025-09-13T00:26:07.651936342Z" level=error msg="Failed to destroy network for sandbox \"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.653615 containerd[1628]: time="2025-09-13T00:26:07.649319699Z" level=error msg="Failed to destroy network for sandbox \"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.653899 containerd[1628]: time="2025-09-13T00:26:07.653801874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8487fb95d8-4ffbb,Uid:07e6f729-e3ab-45e0-ab76-d34bfb5a97db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.654051 containerd[1628]: time="2025-09-13T00:26:07.654037197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-td6hn,Uid:340a543d-09d0-4895-9435-78201b2b74e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.654538 containerd[1628]: time="2025-09-13T00:26:07.654523213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5485fd9959-mljf7,Uid:9a891804-3bb4-4165-9b0f-6e24d55c0380,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.655869 kubelet[2927]: E0913 00:26:07.655841 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.655912 kubelet[2927]: E0913 00:26:07.655892 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rjdzj"
Sep 13 00:26:07.655981 kubelet[2927]: E0913 00:26:07.655946 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.655981 kubelet[2927]: E0913 00:26:07.655968 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb"
Sep 13 00:26:07.656577 containerd[1628]: time="2025-09-13T00:26:07.656528826Z" level=error msg="Failed to destroy network for sandbox \"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.656843 containerd[1628]: time="2025-09-13T00:26:07.656829101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgvtb,Uid:fb56d53b-d4f3-4fb0-8ed0-d970a346c029,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.657405 kubelet[2927]: E0913 00:26:07.657220 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rjdzj"
Sep 13 00:26:07.657405 kubelet[2927]: E0913 00:26:07.657258 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rjdzj_kube-system(f3f75fb6-0073-44fe-9812-dc1c8eba6ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rjdzj_kube-system(f3f75fb6-0073-44fe-9812-dc1c8eba6ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2856d1092a8cecd36db700e75274c6d9e6ffeac8ad2ca4d945d14e78f3f83f7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rjdzj" podUID="f3f75fb6-0073-44fe-9812-dc1c8eba6ba9"
Sep 13 00:26:07.657405 kubelet[2927]: E0913 00:26:07.657279 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.658313 kubelet[2927]: E0913 00:26:07.657294 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tgvtb"
Sep 13 00:26:07.658313 kubelet[2927]: E0913 00:26:07.657304 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tgvtb"
Sep 13 00:26:07.658313 kubelet[2927]: E0913 00:26:07.657321 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tgvtb_kube-system(fb56d53b-d4f3-4fb0-8ed0-d970a346c029)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tgvtb_kube-system(fb56d53b-d4f3-4fb0-8ed0-d970a346c029)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08f525ba8a2464182d3eec8f648fe5bdca70959a00ec1b34fd17a1b0703481f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tgvtb" podUID="fb56d53b-d4f3-4fb0-8ed0-d970a346c029"
Sep 13 00:26:07.658433 kubelet[2927]: E0913 00:26:07.657339 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.658433 kubelet[2927]: E0913 00:26:07.657375 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5485fd9959-mljf7"
Sep 13 00:26:07.658433 kubelet[2927]: E0913 00:26:07.657384 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5485fd9959-mljf7"
Sep 13 00:26:07.658433 kubelet[2927]: E0913 00:26:07.655844 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.658507 kubelet[2927]: E0913 00:26:07.657606 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.658507 kubelet[2927]: E0913 00:26:07.657722 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-td6hn"
Sep 13 00:26:07.658507 kubelet[2927]: E0913 00:26:07.657740 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-td6hn_calico-system(340a543d-09d0-4895-9435-78201b2b74e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-td6hn_calico-system(340a543d-09d0-4895-9435-78201b2b74e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5c8ff4d0d074c756d6006754badfa4f1e25a6b319eea275faa56e8e70244f45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-td6hn" podUID="340a543d-09d0-4895-9435-78201b2b74e5"
Sep 13 00:26:07.658573 kubelet[2927]: E0913 00:26:07.657224 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb"
Sep 13 00:26:07.658573 kubelet[2927]: E0913 00:26:07.657761 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8487fb95d8-4ffbb_calico-system(07e6f729-e3ab-45e0-ab76-d34bfb5a97db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8487fb95d8-4ffbb_calico-system(07e6f729-e3ab-45e0-ab76-d34bfb5a97db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97eadb321bb6a43a3ce1bd8f4539e23d59562c521be8c00b20932240f50e9b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb" podUID="07e6f729-e3ab-45e0-ab76-d34bfb5a97db"
Sep 13 00:26:07.658626 kubelet[2927]: E0913 00:26:07.658401 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5485fd9959-mljf7_calico-system(9a891804-3bb4-4165-9b0f-6e24d55c0380)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5485fd9959-mljf7_calico-system(9a891804-3bb4-4165-9b0f-6e24d55c0380)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84524868414aacc39f209b1fced3213a4ba504985c040b0b5832449598e57303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5485fd9959-mljf7" podUID="9a891804-3bb4-4165-9b0f-6e24d55c0380"
Sep 13 00:26:07.768450 systemd[1]: Created slice kubepods-besteffort-pod250c5ded_929d_4e71_af09_039897e71791.slice - libcontainer container kubepods-besteffort-pod250c5ded_929d_4e71_af09_039897e71791.slice.
Sep 13 00:26:07.769807 containerd[1628]: time="2025-09-13T00:26:07.769783613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjbl,Uid:250c5ded-929d-4e71-af09-039897e71791,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:07.807807 containerd[1628]: time="2025-09-13T00:26:07.807747814Z" level=error msg="Failed to destroy network for sandbox \"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.808673 containerd[1628]: time="2025-09-13T00:26:07.808642804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjbl,Uid:250c5ded-929d-4e71-af09-039897e71791,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.808930 kubelet[2927]: E0913 00:26:07.808903 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:07.809004 kubelet[2927]: E0913 00:26:07.808939 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9rjbl"
Sep 13 00:26:07.809004 kubelet[2927]: E0913 00:26:07.808954 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9rjbl"
Sep 13 00:26:07.809004 kubelet[2927]: E0913 00:26:07.808982 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9rjbl_calico-system(250c5ded-929d-4e71-af09-039897e71791)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9rjbl_calico-system(250c5ded-929d-4e71-af09-039897e71791)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"935bfd7481e9e603859b27e209d0268c5303ff74600ad66945ce00dbaa930d45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9rjbl" podUID="250c5ded-929d-4e71-af09-039897e71791"
Sep 13 00:26:07.985049 containerd[1628]: time="2025-09-13T00:26:07.984962207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 13 00:26:08.298002 kubelet[2927]: E0913 00:26:08.297933 2927 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 13 00:26:08.298002 kubelet[2927]: E0913 00:26:08.297958 2927 projected.go:194] Error preparing data for projected volume kube-api-access-khr2c for pod calico-apiserver/calico-apiserver-6b9768b6f-brfmv: failed to sync
configmap cache: timed out waiting for the condition Sep 13 00:26:08.298319 kubelet[2927]: E0913 00:26:08.298006 2927 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fef92abd-c727-44ef-9eb1-bae2f69ae74d-kube-api-access-khr2c podName:fef92abd-c727-44ef-9eb1-bae2f69ae74d nodeName:}" failed. No retries permitted until 2025-09-13 00:26:08.797991691 +0000 UTC m=+29.149453550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-khr2c" (UniqueName: "kubernetes.io/projected/fef92abd-c727-44ef-9eb1-bae2f69ae74d-kube-api-access-khr2c") pod "calico-apiserver-6b9768b6f-brfmv" (UID: "fef92abd-c727-44ef-9eb1-bae2f69ae74d") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:08.300483 kubelet[2927]: E0913 00:26:08.300470 2927 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:08.300521 kubelet[2927]: E0913 00:26:08.300485 2927 projected.go:194] Error preparing data for projected volume kube-api-access-qc9zv for pod calico-apiserver/calico-apiserver-6b9768b6f-mvxpt: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:08.300521 kubelet[2927]: E0913 00:26:08.300509 2927 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fe582828-806d-4540-8f45-5dc578de7bff-kube-api-access-qc9zv podName:fe582828-806d-4540-8f45-5dc578de7bff nodeName:}" failed. No retries permitted until 2025-09-13 00:26:08.800500584 +0000 UTC m=+29.151962441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qc9zv" (UniqueName: "kubernetes.io/projected/fe582828-806d-4540-8f45-5dc578de7bff-kube-api-access-qc9zv") pod "calico-apiserver-6b9768b6f-mvxpt" (UID: "fe582828-806d-4540-8f45-5dc578de7bff") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:08.898024 containerd[1628]: time="2025-09-13T00:26:08.897998716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-mvxpt,Uid:fe582828-806d-4540-8f45-5dc578de7bff,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:26:08.931227 containerd[1628]: time="2025-09-13T00:26:08.931193385Z" level=error msg="Failed to destroy network for sandbox \"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:08.932050 containerd[1628]: time="2025-09-13T00:26:08.931991354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-mvxpt,Uid:fe582828-806d-4540-8f45-5dc578de7bff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:08.932855 systemd[1]: run-netns-cni\x2dc95993e6\x2db8f9\x2d8f85\x2d5950\x2dffcdce4dbc2e.mount: Deactivated successfully. 
Sep 13 00:26:08.933073 kubelet[2927]: E0913 00:26:08.932963 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:08.933073 kubelet[2927]: E0913 00:26:08.933003 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt" Sep 13 00:26:08.933073 kubelet[2927]: E0913 00:26:08.933017 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt" Sep 13 00:26:08.933549 kubelet[2927]: E0913 00:26:08.933526 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b9768b6f-mvxpt_calico-apiserver(fe582828-806d-4540-8f45-5dc578de7bff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b9768b6f-mvxpt_calico-apiserver(fe582828-806d-4540-8f45-5dc578de7bff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"238ce81f087c8d3e499a7f3fa1edeb37df794ba2d56b8acd6bd690c2cdf93bab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt" podUID="fe582828-806d-4540-8f45-5dc578de7bff" Sep 13 00:26:09.182597 containerd[1628]: time="2025-09-13T00:26:09.182402000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-brfmv,Uid:fef92abd-c727-44ef-9eb1-bae2f69ae74d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:26:09.229561 containerd[1628]: time="2025-09-13T00:26:09.229436813Z" level=error msg="Failed to destroy network for sandbox \"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:09.231232 systemd[1]: run-netns-cni\x2d1830679b\x2da937\x2d96aa\x2d002e\x2d37bdb377e246.mount: Deactivated successfully. 
Sep 13 00:26:09.240276 containerd[1628]: time="2025-09-13T00:26:09.240174868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-brfmv,Uid:fef92abd-c727-44ef-9eb1-bae2f69ae74d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:09.240474 kubelet[2927]: E0913 00:26:09.240320 2927 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:09.240474 kubelet[2927]: E0913 00:26:09.240394 2927 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv" Sep 13 00:26:09.240474 kubelet[2927]: E0913 00:26:09.240409 2927 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv" Sep 13 00:26:09.240647 kubelet[2927]: E0913 00:26:09.240454 2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b9768b6f-brfmv_calico-apiserver(fef92abd-c727-44ef-9eb1-bae2f69ae74d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b9768b6f-brfmv_calico-apiserver(fef92abd-c727-44ef-9eb1-bae2f69ae74d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f08c872d2f6f9433a64c45834766ffc871a8ebb2a31c365521aa2d9438ce47fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv" podUID="fef92abd-c727-44ef-9eb1-bae2f69ae74d" Sep 13 00:26:12.248372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91930671.mount: Deactivated successfully. 
Sep 13 00:26:12.371372 containerd[1628]: time="2025-09-13T00:26:12.371116189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:26:12.381254 containerd[1628]: time="2025-09-13T00:26:12.381178819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.396107963s" Sep 13 00:26:12.381254 containerd[1628]: time="2025-09-13T00:26:12.381199607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:26:12.383226 containerd[1628]: time="2025-09-13T00:26:12.383212578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:12.390590 containerd[1628]: time="2025-09-13T00:26:12.390566110Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:12.390849 containerd[1628]: time="2025-09-13T00:26:12.390836291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:12.405269 containerd[1628]: time="2025-09-13T00:26:12.405235710Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:26:12.449377 containerd[1628]: time="2025-09-13T00:26:12.449267216Z" level=info msg="Container 
de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:12.450232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638053602.mount: Deactivated successfully. Sep 13 00:26:12.488945 containerd[1628]: time="2025-09-13T00:26:12.488924754Z" level=info msg="CreateContainer within sandbox \"4c74e1a9cc77b30de39de73e4d1bb74c5f5c202718264caedd6ab7a236dd6238\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\"" Sep 13 00:26:12.489289 containerd[1628]: time="2025-09-13T00:26:12.489278511Z" level=info msg="StartContainer for \"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\"" Sep 13 00:26:12.498006 containerd[1628]: time="2025-09-13T00:26:12.497980289Z" level=info msg="connecting to shim de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e" address="unix:///run/containerd/s/eab66ab92d6535f060224470e730d49cb9b00056775c410e61a33f9f64ce374e" protocol=ttrpc version=3 Sep 13 00:26:12.606460 systemd[1]: Started cri-containerd-de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e.scope - libcontainer container de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e. Sep 13 00:26:12.649613 containerd[1628]: time="2025-09-13T00:26:12.649551589Z" level=info msg="StartContainer for \"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" returns successfully" Sep 13 00:26:12.732400 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:26:12.733799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:26:13.025272 kubelet[2927]: I0913 00:26:13.025136 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-szwgf" podStartSLOduration=1.232282464 podStartE2EDuration="17.025116233s" podCreationTimestamp="2025-09-13 00:25:56 +0000 UTC" firstStartedPulling="2025-09-13 00:25:56.588774209 +0000 UTC m=+16.940236065" lastFinishedPulling="2025-09-13 00:26:12.381607978 +0000 UTC m=+32.733069834" observedRunningTime="2025-09-13 00:26:13.022651378 +0000 UTC m=+33.374113243" watchObservedRunningTime="2025-09-13 00:26:13.025116233 +0000 UTC m=+33.376578264" Sep 13 00:26:13.119580 kubelet[2927]: I0913 00:26:13.119338 2927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h57p5\" (UniqueName: \"kubernetes.io/projected/9a891804-3bb4-4165-9b0f-6e24d55c0380-kube-api-access-h57p5\") pod \"9a891804-3bb4-4165-9b0f-6e24d55c0380\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " Sep 13 00:26:13.119580 kubelet[2927]: I0913 00:26:13.119436 2927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-backend-key-pair\") pod \"9a891804-3bb4-4165-9b0f-6e24d55c0380\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " Sep 13 00:26:13.119580 kubelet[2927]: I0913 00:26:13.119451 2927 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-ca-bundle\") pod \"9a891804-3bb4-4165-9b0f-6e24d55c0380\" (UID: \"9a891804-3bb4-4165-9b0f-6e24d55c0380\") " Sep 13 00:26:13.129445 kubelet[2927]: I0913 00:26:13.128851 2927 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod 
"9a891804-3bb4-4165-9b0f-6e24d55c0380" (UID: "9a891804-3bb4-4165-9b0f-6e24d55c0380"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:26:13.129445 kubelet[2927]: I0913 00:26:13.129050 2927 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a891804-3bb4-4165-9b0f-6e24d55c0380" (UID: "9a891804-3bb4-4165-9b0f-6e24d55c0380"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:26:13.129762 kubelet[2927]: I0913 00:26:13.129749 2927 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a891804-3bb4-4165-9b0f-6e24d55c0380-kube-api-access-h57p5" (OuterVolumeSpecName: "kube-api-access-h57p5") pod "9a891804-3bb4-4165-9b0f-6e24d55c0380" (UID: "9a891804-3bb4-4165-9b0f-6e24d55c0380"). InnerVolumeSpecName "kube-api-access-h57p5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:26:13.188284 containerd[1628]: time="2025-09-13T00:26:13.188257483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" id:\"c9cc75a91e4bf9e2a177bb64d0900e6691f643e6cb5e097f89eaa1662c700e13\" pid:4020 exit_status:1 exited_at:{seconds:1757723173 nanos:186880076}" Sep 13 00:26:13.220486 kubelet[2927]: I0913 00:26:13.220457 2927 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:26:13.220486 kubelet[2927]: I0913 00:26:13.220479 2927 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h57p5\" (UniqueName: \"kubernetes.io/projected/9a891804-3bb4-4165-9b0f-6e24d55c0380-kube-api-access-h57p5\") on node \"localhost\" DevicePath \"\"" Sep 13 00:26:13.220486 kubelet[2927]: I0913 00:26:13.220485 2927 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a891804-3bb4-4165-9b0f-6e24d55c0380-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:26:13.250137 systemd[1]: var-lib-kubelet-pods-9a891804\x2d3bb4\x2d4165\x2d9b0f\x2d6e24d55c0380-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh57p5.mount: Deactivated successfully. Sep 13 00:26:13.250192 systemd[1]: var-lib-kubelet-pods-9a891804\x2d3bb4\x2d4165\x2d9b0f\x2d6e24d55c0380-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:26:13.767245 systemd[1]: Removed slice kubepods-besteffort-pod9a891804_3bb4_4165_9b0f_6e24d55c0380.slice - libcontainer container kubepods-besteffort-pod9a891804_3bb4_4165_9b0f_6e24d55c0380.slice. 
Sep 13 00:26:14.118190 systemd[1]: Created slice kubepods-besteffort-pode87c4c1a_b8e6_4a22_94c1_00898ffaeadd.slice - libcontainer container kubepods-besteffort-pode87c4c1a_b8e6_4a22_94c1_00898ffaeadd.slice. Sep 13 00:26:14.120234 kubelet[2927]: W0913 00:26:14.120222 2927 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 13 00:26:14.120514 kubelet[2927]: E0913 00:26:14.120450 2927 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 13 00:26:14.120514 kubelet[2927]: I0913 00:26:14.120221 2927 status_manager.go:890] "Failed to get status for pod" podUID="e87c4c1a-b8e6-4a22-94c1-00898ffaeadd" pod="calico-system/whisker-678f7c66bb-qxgwd" err="pods \"whisker-678f7c66bb-qxgwd\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Sep 13 00:26:14.158415 containerd[1628]: time="2025-09-13T00:26:14.158390864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" id:\"8a27e45d61e7ae8d6927259473d0060858c9aa43a8fde0d1e548199a007ec395\" pid:4049 exit_status:1 exited_at:{seconds:1757723174 nanos:158172775}" Sep 13 00:26:14.225925 kubelet[2927]: I0913 00:26:14.225818 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e87c4c1a-b8e6-4a22-94c1-00898ffaeadd-whisker-ca-bundle\") pod \"whisker-678f7c66bb-qxgwd\" (UID: \"e87c4c1a-b8e6-4a22-94c1-00898ffaeadd\") " pod="calico-system/whisker-678f7c66bb-qxgwd" Sep 13 00:26:14.226307 kubelet[2927]: I0913 00:26:14.226047 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62px\" (UniqueName: \"kubernetes.io/projected/e87c4c1a-b8e6-4a22-94c1-00898ffaeadd-kube-api-access-s62px\") pod \"whisker-678f7c66bb-qxgwd\" (UID: \"e87c4c1a-b8e6-4a22-94c1-00898ffaeadd\") " pod="calico-system/whisker-678f7c66bb-qxgwd" Sep 13 00:26:14.226307 kubelet[2927]: I0913 00:26:14.226073 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e87c4c1a-b8e6-4a22-94c1-00898ffaeadd-whisker-backend-key-pair\") pod \"whisker-678f7c66bb-qxgwd\" (UID: \"e87c4c1a-b8e6-4a22-94c1-00898ffaeadd\") " pod="calico-system/whisker-678f7c66bb-qxgwd" Sep 13 00:26:14.669704 systemd-networkd[1519]: vxlan.calico: Link UP Sep 13 00:26:14.670774 systemd-networkd[1519]: vxlan.calico: Gained carrier Sep 13 00:26:15.021118 containerd[1628]: time="2025-09-13T00:26:15.021051096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678f7c66bb-qxgwd,Uid:e87c4c1a-b8e6-4a22-94c1-00898ffaeadd,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:15.446647 systemd-networkd[1519]: calia8b5728f0cc: Link UP Sep 13 00:26:15.447428 systemd-networkd[1519]: calia8b5728f0cc: Gained carrier Sep 13 00:26:15.455868 containerd[1628]: 2025-09-13 00:26:15.080 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--678f7c66bb--qxgwd-eth0 whisker-678f7c66bb- calico-system e87c4c1a-b8e6-4a22-94c1-00898ffaeadd 858 0 2025-09-13 00:26:14 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:678f7c66bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-678f7c66bb-qxgwd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia8b5728f0cc [] [] }} ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-" Sep 13 00:26:15.455868 containerd[1628]: 2025-09-13 00:26:15.081 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.455868 containerd[1628]: 2025-09-13 00:26:15.395 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" HandleID="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Workload="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.399 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" HandleID="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Workload="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b4150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-678f7c66bb-qxgwd", "timestamp":"2025-09-13 00:26:15.395321999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.399 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.400 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.400 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.419 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" host="localhost" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.430 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.432 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.433 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.434 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:15.456160 containerd[1628]: 2025-09-13 00:26:15.434 [INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" host="localhost" Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.435 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3 Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.437 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" host="localhost" Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.440 [INFO][4264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" host="localhost" Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.440 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" host="localhost" Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.440 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:15.456420 containerd[1628]: 2025-09-13 00:26:15.440 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" HandleID="k8s-pod-network.86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Workload="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.456886 containerd[1628]: 2025-09-13 00:26:15.442 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--678f7c66bb--qxgwd-eth0", GenerateName:"whisker-678f7c66bb-", Namespace:"calico-system", SelfLink:"", UID:"e87c4c1a-b8e6-4a22-94c1-00898ffaeadd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"678f7c66bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-678f7c66bb-qxgwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8b5728f0cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:15.456886 containerd[1628]: 2025-09-13 00:26:15.442 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.456950 containerd[1628]: 2025-09-13 00:26:15.442 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8b5728f0cc ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.456950 containerd[1628]: 2025-09-13 00:26:15.447 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 
00:26:15.457732 containerd[1628]: 2025-09-13 00:26:15.448 [INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--678f7c66bb--qxgwd-eth0", GenerateName:"whisker-678f7c66bb-", Namespace:"calico-system", SelfLink:"", UID:"e87c4c1a-b8e6-4a22-94c1-00898ffaeadd", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"678f7c66bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3", Pod:"whisker-678f7c66bb-qxgwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia8b5728f0cc", MAC:"da:d7:d9:46:8e:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:15.457814 containerd[1628]: 2025-09-13 00:26:15.453 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" Namespace="calico-system" Pod="whisker-678f7c66bb-qxgwd" WorkloadEndpoint="localhost-k8s-whisker--678f7c66bb--qxgwd-eth0" Sep 13 00:26:15.541499 containerd[1628]: time="2025-09-13T00:26:15.541429589Z" level=info msg="connecting to shim 86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3" address="unix:///run/containerd/s/c4ee68eef2649678b8992c9c0f4aef23ff51c726cc2b0f5b0eb6c7396494f0b7" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:15.565442 systemd[1]: Started cri-containerd-86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3.scope - libcontainer container 86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3. Sep 13 00:26:15.573032 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:15.602819 containerd[1628]: time="2025-09-13T00:26:15.602453404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678f7c66bb-qxgwd,Uid:e87c4c1a-b8e6-4a22-94c1-00898ffaeadd,Namespace:calico-system,Attempt:0,} returns sandbox id \"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3\"" Sep 13 00:26:15.605903 containerd[1628]: time="2025-09-13T00:26:15.605850690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:26:15.763934 kubelet[2927]: I0913 00:26:15.763875 2927 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a891804-3bb4-4165-9b0f-6e24d55c0380" path="/var/lib/kubelet/pods/9a891804-3bb4-4165-9b0f-6e24d55c0380/volumes" Sep 13 00:26:15.978469 systemd-networkd[1519]: vxlan.calico: Gained IPv6LL Sep 13 00:26:16.812534 containerd[1628]: time="2025-09-13T00:26:16.812136634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:16.812534 containerd[1628]: time="2025-09-13T00:26:16.812484410Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:26:16.812534 containerd[1628]: time="2025-09-13T00:26:16.812500219Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:16.813544 containerd[1628]: time="2025-09-13T00:26:16.813532746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:16.813934 containerd[1628]: time="2025-09-13T00:26:16.813912882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.208040703s" Sep 13 00:26:16.813934 containerd[1628]: time="2025-09-13T00:26:16.813929737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:26:16.815448 containerd[1628]: time="2025-09-13T00:26:16.815402442Z" level=info msg="CreateContainer within sandbox \"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:26:16.821513 containerd[1628]: time="2025-09-13T00:26:16.821400829Z" level=info msg="Container a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:16.822580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986662202.mount: Deactivated successfully. 
Sep 13 00:26:16.830287 containerd[1628]: time="2025-09-13T00:26:16.830223758Z" level=info msg="CreateContainer within sandbox \"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d\"" Sep 13 00:26:16.830667 containerd[1628]: time="2025-09-13T00:26:16.830646405Z" level=info msg="StartContainer for \"a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d\"" Sep 13 00:26:16.835732 containerd[1628]: time="2025-09-13T00:26:16.831869842Z" level=info msg="connecting to shim a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d" address="unix:///run/containerd/s/c4ee68eef2649678b8992c9c0f4aef23ff51c726cc2b0f5b0eb6c7396494f0b7" protocol=ttrpc version=3 Sep 13 00:26:16.854531 systemd[1]: Started cri-containerd-a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d.scope - libcontainer container a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d. 
Sep 13 00:26:16.905446 containerd[1628]: time="2025-09-13T00:26:16.905422000Z" level=info msg="StartContainer for \"a29a379882c2d298ba10a103f6fe932d4766891af30f8e013262a331d5022e3d\" returns successfully" Sep 13 00:26:16.924808 containerd[1628]: time="2025-09-13T00:26:16.924784444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:26:17.066600 systemd-networkd[1519]: calia8b5728f0cc: Gained IPv6LL Sep 13 00:26:17.764252 containerd[1628]: time="2025-09-13T00:26:17.764179624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjdzj,Uid:f3f75fb6-0073-44fe-9812-dc1c8eba6ba9,Namespace:kube-system,Attempt:0,}" Sep 13 00:26:17.842772 systemd-networkd[1519]: califd89a15de05: Link UP Sep 13 00:26:17.843313 systemd-networkd[1519]: califd89a15de05: Gained carrier Sep 13 00:26:17.856433 containerd[1628]: 2025-09-13 00:26:17.799 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0 coredns-668d6bf9bc- kube-system f3f75fb6-0073-44fe-9812-dc1c8eba6ba9 791 0 2025-09-13 00:25:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rjdzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd89a15de05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-" Sep 13 00:26:17.856433 containerd[1628]: 2025-09-13 00:26:17.799 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.856433 containerd[1628]: 2025-09-13 00:26:17.816 [INFO][4386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" HandleID="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Workload="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.816 [INFO][4386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" HandleID="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Workload="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rjdzj", "timestamp":"2025-09-13 00:26:17.816187729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.816 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.816 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.816 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.820 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" host="localhost" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.822 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.824 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.825 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.826 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:17.856810 containerd[1628]: 2025-09-13 00:26:17.826 [INFO][4386] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" host="localhost" Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.827 [INFO][4386] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836 Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.830 [INFO][4386] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" host="localhost" Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.836 [INFO][4386] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" host="localhost" Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.837 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" host="localhost" Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.837 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:17.857092 containerd[1628]: 2025-09-13 00:26:17.837 [INFO][4386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" HandleID="k8s-pod-network.38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Workload="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.857325 containerd[1628]: 2025-09-13 00:26:17.840 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3f75fb6-0073-44fe-9812-dc1c8eba6ba9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rjdzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd89a15de05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:17.857552 containerd[1628]: 2025-09-13 00:26:17.840 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.857552 containerd[1628]: 2025-09-13 00:26:17.840 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd89a15de05 ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.857552 containerd[1628]: 2025-09-13 00:26:17.842 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.857918 containerd[1628]: 2025-09-13 00:26:17.842 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f3f75fb6-0073-44fe-9812-dc1c8eba6ba9", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836", Pod:"coredns-668d6bf9bc-rjdzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd89a15de05", MAC:"aa:a3:95:8f:7d:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:17.857918 containerd[1628]: 2025-09-13 00:26:17.850 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" Namespace="kube-system" Pod="coredns-668d6bf9bc-rjdzj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rjdzj-eth0" Sep 13 00:26:17.874587 containerd[1628]: time="2025-09-13T00:26:17.874560695Z" level=info msg="connecting to shim 38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836" address="unix:///run/containerd/s/6d2a1bd154772a1625092b358e1da6910d1d4a36e333dbad4b1af9b1da97a3ef" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:17.891456 systemd[1]: Started cri-containerd-38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836.scope - libcontainer container 38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836. 
Sep 13 00:26:17.900963 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:17.927813 containerd[1628]: time="2025-09-13T00:26:17.927748740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rjdzj,Uid:f3f75fb6-0073-44fe-9812-dc1c8eba6ba9,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836\"" Sep 13 00:26:17.929532 containerd[1628]: time="2025-09-13T00:26:17.929488811Z" level=info msg="CreateContainer within sandbox \"38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:26:17.947357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237710576.mount: Deactivated successfully. Sep 13 00:26:17.949860 containerd[1628]: time="2025-09-13T00:26:17.949840497Z" level=info msg="Container 7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:17.952887 containerd[1628]: time="2025-09-13T00:26:17.952869755Z" level=info msg="CreateContainer within sandbox \"38c61c7f9b2c5dcf0f41ef42673c94f47b7bf97587ab7ee3351fbe28f0432836\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75\"" Sep 13 00:26:17.953305 containerd[1628]: time="2025-09-13T00:26:17.953228645Z" level=info msg="StartContainer for \"7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75\"" Sep 13 00:26:17.953692 containerd[1628]: time="2025-09-13T00:26:17.953677696Z" level=info msg="connecting to shim 7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75" address="unix:///run/containerd/s/6d2a1bd154772a1625092b358e1da6910d1d4a36e333dbad4b1af9b1da97a3ef" protocol=ttrpc version=3 Sep 13 00:26:17.966567 systemd[1]: Started 
cri-containerd-7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75.scope - libcontainer container 7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75. Sep 13 00:26:17.990694 containerd[1628]: time="2025-09-13T00:26:17.990671665Z" level=info msg="StartContainer for \"7e3860fcb7d8af6e9aac95f0f1031e9af11d056a62f0f6d7cbcc59a6d12f4c75\" returns successfully" Sep 13 00:26:18.024564 kubelet[2927]: I0913 00:26:18.024489 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rjdzj" podStartSLOduration=33.024478183 podStartE2EDuration="33.024478183s" podCreationTimestamp="2025-09-13 00:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:26:18.024130856 +0000 UTC m=+38.375592721" watchObservedRunningTime="2025-09-13 00:26:18.024478183 +0000 UTC m=+38.375940043" Sep 13 00:26:18.854188 containerd[1628]: time="2025-09-13T00:26:18.854158817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:18.854892 containerd[1628]: time="2025-09-13T00:26:18.854556267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:26:18.855099 containerd[1628]: time="2025-09-13T00:26:18.855084869Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:18.856427 containerd[1628]: time="2025-09-13T00:26:18.856412194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:18.857417 containerd[1628]: 
time="2025-09-13T00:26:18.857402360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 1.932594931s" Sep 13 00:26:18.857904 containerd[1628]: time="2025-09-13T00:26:18.857599370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:26:18.866108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126177531.mount: Deactivated successfully. Sep 13 00:26:18.876228 containerd[1628]: time="2025-09-13T00:26:18.876200609Z" level=info msg="CreateContainer within sandbox \"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:26:18.881728 containerd[1628]: time="2025-09-13T00:26:18.881702828Z" level=info msg="Container 43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:18.883256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549461260.mount: Deactivated successfully. 
Sep 13 00:26:18.889907 containerd[1628]: time="2025-09-13T00:26:18.889849342Z" level=info msg="CreateContainer within sandbox \"86d6b01b4465d96431b2a4eac2ff71aebf3f67bd9592c3964064a128ea34f6e3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b\"" Sep 13 00:26:18.890307 containerd[1628]: time="2025-09-13T00:26:18.890274778Z" level=info msg="StartContainer for \"43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b\"" Sep 13 00:26:18.891000 containerd[1628]: time="2025-09-13T00:26:18.890984770Z" level=info msg="connecting to shim 43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b" address="unix:///run/containerd/s/c4ee68eef2649678b8992c9c0f4aef23ff51c726cc2b0f5b0eb6c7396494f0b7" protocol=ttrpc version=3 Sep 13 00:26:18.905546 systemd[1]: Started cri-containerd-43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b.scope - libcontainer container 43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b. 
Sep 13 00:26:18.922423 systemd-networkd[1519]: califd89a15de05: Gained IPv6LL Sep 13 00:26:18.948525 containerd[1628]: time="2025-09-13T00:26:18.948500994Z" level=info msg="StartContainer for \"43535ef6298fa3720c558d42d7653fc9d30f812cbaa05fe378ccae4e3a9f9d0b\" returns successfully" Sep 13 00:26:19.097988 kubelet[2927]: I0913 00:26:19.097407 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-678f7c66bb-qxgwd" podStartSLOduration=1.838324547 podStartE2EDuration="5.097390748s" podCreationTimestamp="2025-09-13 00:26:14 +0000 UTC" firstStartedPulling="2025-09-13 00:26:15.603587945 +0000 UTC m=+35.955049801" lastFinishedPulling="2025-09-13 00:26:18.862654145 +0000 UTC m=+39.214116002" observedRunningTime="2025-09-13 00:26:19.089335448 +0000 UTC m=+39.440797333" watchObservedRunningTime="2025-09-13 00:26:19.097390748 +0000 UTC m=+39.448852616" Sep 13 00:26:19.774080 containerd[1628]: time="2025-09-13T00:26:19.774047242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-brfmv,Uid:fef92abd-c727-44ef-9eb1-bae2f69ae74d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:26:19.774527 containerd[1628]: time="2025-09-13T00:26:19.774507466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjbl,Uid:250c5ded-929d-4e71-af09-039897e71791,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:19.774640 containerd[1628]: time="2025-09-13T00:26:19.774615524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-mvxpt,Uid:fe582828-806d-4540-8f45-5dc578de7bff,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:26:19.912624 systemd-networkd[1519]: caliea2947c2797: Link UP Sep 13 00:26:19.913932 systemd-networkd[1519]: caliea2947c2797: Gained carrier Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.838 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--9rjbl-eth0 csi-node-driver- calico-system 250c5ded-929d-4e71-af09-039897e71791 672 0 2025-09-13 00:25:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9rjbl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea2947c2797 [] [] }} ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.839 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.875 [INFO][4565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" HandleID="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Workload="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.876 [INFO][4565] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" HandleID="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Workload="localhost-k8s-csi--node--driver--9rjbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-9rjbl", "timestamp":"2025-09-13 00:26:19.875963334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.876 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.876 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.876 [INFO][4565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.883 [INFO][4565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.886 [INFO][4565] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.893 [INFO][4565] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.895 [INFO][4565] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.896 [INFO][4565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.897 [INFO][4565] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.898 [INFO][4565] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720 Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.900 [INFO][4565] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.903 [INFO][4565] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.903 [INFO][4565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" host="localhost" Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.904 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:26:19.929752 containerd[1628]: 2025-09-13 00:26:19.904 [INFO][4565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" HandleID="k8s-pod-network.f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Workload="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.908 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9rjbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"250c5ded-929d-4e71-af09-039897e71791", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9rjbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea2947c2797", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.908 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.909 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea2947c2797 ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.914 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.914 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9rjbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"250c5ded-929d-4e71-af09-039897e71791", ResourceVersion:"672", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720", Pod:"csi-node-driver-9rjbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea2947c2797", MAC:"fa:31:d5:53:1a:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:19.930969 containerd[1628]: 2025-09-13 00:26:19.922 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" Namespace="calico-system" Pod="csi-node-driver-9rjbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rjbl-eth0" Sep 13 00:26:19.978252 containerd[1628]: time="2025-09-13T00:26:19.978223899Z" level=info msg="connecting to shim f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720" address="unix:///run/containerd/s/d7675d9f1cd6ba79880a6811ac89c53596e8822e3623f7742c8a04c21b768fa8" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:20.011622 systemd[1]: Started 
cri-containerd-f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720.scope - libcontainer container f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720. Sep 13 00:26:20.026497 systemd-networkd[1519]: calib50a955d642: Link UP Sep 13 00:26:20.027473 systemd-networkd[1519]: calib50a955d642: Gained carrier Sep 13 00:26:20.039321 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.841 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0 calico-apiserver-6b9768b6f- calico-apiserver fe582828-806d-4540-8f45-5dc578de7bff 790 0 2025-09-13 00:25:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b9768b6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b9768b6f-mvxpt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib50a955d642 [] [] }} ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.842 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.890 [INFO][4567] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" HandleID="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Workload="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.890 [INFO][4567] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" HandleID="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Workload="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b9768b6f-mvxpt", "timestamp":"2025-09-13 00:26:19.890423683 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.890 [INFO][4567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.903 [INFO][4567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.903 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.986 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.991 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.997 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.998 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.999 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:19.999 [INFO][4567] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.000 [INFO][4567] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.009 [INFO][4567] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.014 [INFO][4567] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.014 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" host="localhost" Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.014 [INFO][4567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:20.042669 containerd[1628]: 2025-09-13 00:26:20.014 [INFO][4567] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" HandleID="k8s-pod-network.2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Workload="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.023 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0", GenerateName:"calico-apiserver-6b9768b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe582828-806d-4540-8f45-5dc578de7bff", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9768b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b9768b6f-mvxpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib50a955d642", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.024 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.024 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib50a955d642 ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.027 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.032 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0", GenerateName:"calico-apiserver-6b9768b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe582828-806d-4540-8f45-5dc578de7bff", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9768b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d", Pod:"calico-apiserver-6b9768b6f-mvxpt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib50a955d642", MAC:"ce:1a:18:ee:bb:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.043653 containerd[1628]: 2025-09-13 00:26:20.040 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-mvxpt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--mvxpt-eth0" Sep 13 00:26:20.062214 containerd[1628]: time="2025-09-13T00:26:20.061696295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rjbl,Uid:250c5ded-929d-4e71-af09-039897e71791,Namespace:calico-system,Attempt:0,} returns sandbox id \"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720\"" Sep 13 00:26:20.073456 containerd[1628]: time="2025-09-13T00:26:20.073182163Z" level=info msg="connecting to shim 2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d" address="unix:///run/containerd/s/4e5288b34aa94077e4a5293b0bdab79253a6cf9aba863d4bd34a80714ee2f82e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:20.098409 containerd[1628]: time="2025-09-13T00:26:20.098321541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:26:20.111548 systemd[1]: Started cri-containerd-2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d.scope - libcontainer container 2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d. 
Sep 13 00:26:20.124064 systemd-networkd[1519]: cali444412cdf96: Link UP Sep 13 00:26:20.125735 systemd-networkd[1519]: cali444412cdf96: Gained carrier Sep 13 00:26:20.131673 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:19.848 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0 calico-apiserver-6b9768b6f- calico-apiserver fef92abd-c727-44ef-9eb1-bae2f69ae74d 787 0 2025-09-13 00:25:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b9768b6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b9768b6f-brfmv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali444412cdf96 [] [] }} ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:19.848 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:19.890 [INFO][4572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" HandleID="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" 
Workload="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:19.891 [INFO][4572] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" HandleID="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Workload="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b9768b6f-brfmv", "timestamp":"2025-09-13 00:26:19.89096431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:19.891 [INFO][4572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.014 [INFO][4572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.015 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.088 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.094 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.099 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.103 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.108 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.108 [INFO][4572] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.109 [INFO][4572] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461 Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.113 [INFO][4572] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.118 [INFO][4572] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.118 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" host="localhost" Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.118 [INFO][4572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:20.138778 containerd[1628]: 2025-09-13 00:26:20.118 [INFO][4572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" HandleID="k8s-pod-network.f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Workload="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.120 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0", GenerateName:"calico-apiserver-6b9768b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fef92abd-c727-44ef-9eb1-bae2f69ae74d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9768b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b9768b6f-brfmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali444412cdf96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.120 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.120 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali444412cdf96 ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.126 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.128 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0", GenerateName:"calico-apiserver-6b9768b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fef92abd-c727-44ef-9eb1-bae2f69ae74d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b9768b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461", Pod:"calico-apiserver-6b9768b6f-brfmv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali444412cdf96", MAC:"ba:ee:60:fc:c6:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.139214 containerd[1628]: 2025-09-13 00:26:20.136 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" Namespace="calico-apiserver" Pod="calico-apiserver-6b9768b6f-brfmv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b9768b6f--brfmv-eth0" Sep 13 00:26:20.156521 containerd[1628]: time="2025-09-13T00:26:20.156452907Z" level=info msg="connecting to shim f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461" address="unix:///run/containerd/s/b445e5cbd52ff3c84ea4cd67d7592fee985c694023c334c4d5adfc2d175b262e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:20.175505 systemd[1]: Started cri-containerd-f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461.scope - libcontainer container f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461. Sep 13 00:26:20.181068 containerd[1628]: time="2025-09-13T00:26:20.181039819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-mvxpt,Uid:fe582828-806d-4540-8f45-5dc578de7bff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d\"" Sep 13 00:26:20.188261 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:20.214491 containerd[1628]: time="2025-09-13T00:26:20.214464221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b9768b6f-brfmv,Uid:fef92abd-c727-44ef-9eb1-bae2f69ae74d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461\"" Sep 13 00:26:20.763186 containerd[1628]: time="2025-09-13T00:26:20.763026600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgvtb,Uid:fb56d53b-d4f3-4fb0-8ed0-d970a346c029,Namespace:kube-system,Attempt:0,}" Sep 13 00:26:20.763186 containerd[1628]: time="2025-09-13T00:26:20.763119242Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-td6hn,Uid:340a543d-09d0-4895-9435-78201b2b74e5,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:20.841184 systemd-networkd[1519]: cali0968c03714c: Link UP Sep 13 00:26:20.841621 systemd-networkd[1519]: cali0968c03714c: Gained carrier Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.790 [INFO][4764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0 coredns-668d6bf9bc- kube-system fb56d53b-d4f3-4fb0-8ed0-d970a346c029 783 0 2025-09-13 00:25:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tgvtb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0968c03714c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.790 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.811 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" HandleID="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Workload="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.812 [INFO][4791] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" HandleID="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Workload="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tgvtb", "timestamp":"2025-09-13 00:26:20.811941539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.812 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.812 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.812 [INFO][4791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.817 [INFO][4791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.820 [INFO][4791] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.822 [INFO][4791] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.823 [INFO][4791] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.824 [INFO][4791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.824 [INFO][4791] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.825 [INFO][4791] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937 Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.831 [INFO][4791] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4791] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" host="localhost" Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:26:20.853397 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" HandleID="k8s-pod-network.d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Workload="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.837 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fb56d53b-d4f3-4fb0-8ed0-d970a346c029", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tgvtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0968c03714c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.837 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.837 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0968c03714c ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.842 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.842 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fb56d53b-d4f3-4fb0-8ed0-d970a346c029", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937", Pod:"coredns-668d6bf9bc-tgvtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0968c03714c", MAC:"3e:ff:bb:06:e3:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.854099 containerd[1628]: 2025-09-13 00:26:20.851 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgvtb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tgvtb-eth0" Sep 13 00:26:20.869113 containerd[1628]: time="2025-09-13T00:26:20.869063174Z" level=info msg="connecting to shim d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937" address="unix:///run/containerd/s/d3d4cb46a0c83c7f5668e6938786a5c44cddb8dcafd190ddfc6407f1aaf1da9f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:20.895487 systemd[1]: Started cri-containerd-d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937.scope - libcontainer container d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937. Sep 13 00:26:20.905482 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:20.944077 systemd-networkd[1519]: cali3b53c9cb346: Link UP Sep 13 00:26:20.944705 systemd-networkd[1519]: cali3b53c9cb346: Gained carrier Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.788 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--td6hn-eth0 goldmane-54d579b49d- calico-system 340a543d-09d0-4895-9435-78201b2b74e5 788 0 2025-09-13 00:25:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-td6hn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3b53c9cb346 [] [] }} ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.789 
[INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.818 [INFO][4789] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" HandleID="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Workload="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.818 [INFO][4789] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" HandleID="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Workload="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-td6hn", "timestamp":"2025-09-13 00:26:20.81799482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.818 [INFO][4789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.834 [INFO][4789] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.918 [INFO][4789] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.922 [INFO][4789] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.924 [INFO][4789] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.926 [INFO][4789] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.928 [INFO][4789] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.928 [INFO][4789] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.929 [INFO][4789] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00 Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.932 [INFO][4789] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.936 [INFO][4789] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.936 [INFO][4789] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" host="localhost" Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.936 [INFO][4789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:20.960486 containerd[1628]: 2025-09-13 00:26:20.936 [INFO][4789] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" HandleID="k8s-pod-network.d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Workload="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.939 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--td6hn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"340a543d-09d0-4895-9435-78201b2b74e5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-td6hn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b53c9cb346", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.940 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.940 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b53c9cb346 ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.945 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.945 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--td6hn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"340a543d-09d0-4895-9435-78201b2b74e5", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00", Pod:"goldmane-54d579b49d-td6hn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3b53c9cb346", MAC:"da:0f:a6:a0:d0:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:20.961631 containerd[1628]: 2025-09-13 00:26:20.955 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" Namespace="calico-system" Pod="goldmane-54d579b49d-td6hn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--td6hn-eth0" Sep 13 00:26:20.962563 containerd[1628]: time="2025-09-13T00:26:20.962504806Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-tgvtb,Uid:fb56d53b-d4f3-4fb0-8ed0-d970a346c029,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937\"" Sep 13 00:26:20.965256 containerd[1628]: time="2025-09-13T00:26:20.965233212Z" level=info msg="CreateContainer within sandbox \"d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:26:20.977956 containerd[1628]: time="2025-09-13T00:26:20.977447575Z" level=info msg="Container f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:20.979729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126693337.mount: Deactivated successfully. Sep 13 00:26:20.988376 containerd[1628]: time="2025-09-13T00:26:20.988329510Z" level=info msg="CreateContainer within sandbox \"d3fe9abc606ea8286762285e6c3da30d08518e91a2fc20452b7e35e52677c937\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a\"" Sep 13 00:26:20.989725 containerd[1628]: time="2025-09-13T00:26:20.989460125Z" level=info msg="connecting to shim d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00" address="unix:///run/containerd/s/d23667d6cad94543093c8b5cb1d85cc419332fbb0a609ef29564d4daa4c0e77e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:20.990633 containerd[1628]: time="2025-09-13T00:26:20.990615639Z" level=info msg="StartContainer for \"f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a\"" Sep 13 00:26:20.996522 containerd[1628]: time="2025-09-13T00:26:20.996454393Z" level=info msg="connecting to shim f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a" address="unix:///run/containerd/s/d3d4cb46a0c83c7f5668e6938786a5c44cddb8dcafd190ddfc6407f1aaf1da9f" protocol=ttrpc version=3 Sep 13 00:26:21.010562 systemd[1]: Started 
cri-containerd-d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00.scope - libcontainer container d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00. Sep 13 00:26:21.013906 systemd[1]: Started cri-containerd-f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a.scope - libcontainer container f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a. Sep 13 00:26:21.031592 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:21.049824 containerd[1628]: time="2025-09-13T00:26:21.049795347Z" level=info msg="StartContainer for \"f25a05f0600fd508d9662f63c1996b59fc6da0a453fbd35a79f0eb33b1b0ee9a\" returns successfully" Sep 13 00:26:21.069745 containerd[1628]: time="2025-09-13T00:26:21.069722790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-td6hn,Uid:340a543d-09d0-4895-9435-78201b2b74e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00\"" Sep 13 00:26:21.098502 systemd-networkd[1519]: caliea2947c2797: Gained IPv6LL Sep 13 00:26:21.482519 systemd-networkd[1519]: cali444412cdf96: Gained IPv6LL Sep 13 00:26:21.761586 containerd[1628]: time="2025-09-13T00:26:21.761510278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:21.763925 containerd[1628]: time="2025-09-13T00:26:21.763426670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:26:21.763925 containerd[1628]: time="2025-09-13T00:26:21.763853142Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:21.764048 containerd[1628]: time="2025-09-13T00:26:21.764037561Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8487fb95d8-4ffbb,Uid:07e6f729-e3ab-45e0-ab76-d34bfb5a97db,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:21.767490 containerd[1628]: time="2025-09-13T00:26:21.767471513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:21.767935 containerd[1628]: time="2025-09-13T00:26:21.767922644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.669344614s" Sep 13 00:26:21.767997 containerd[1628]: time="2025-09-13T00:26:21.767987760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:26:21.782474 containerd[1628]: time="2025-09-13T00:26:21.782430186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:26:21.786152 containerd[1628]: time="2025-09-13T00:26:21.785832644Z" level=info msg="CreateContainer within sandbox \"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:26:21.796470 containerd[1628]: time="2025-09-13T00:26:21.796441000Z" level=info msg="Container bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:21.802967 containerd[1628]: time="2025-09-13T00:26:21.802943081Z" level=info msg="CreateContainer within sandbox \"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} 
returns container id \"bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a\"" Sep 13 00:26:21.804081 containerd[1628]: time="2025-09-13T00:26:21.804051575Z" level=info msg="StartContainer for \"bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a\"" Sep 13 00:26:21.806945 containerd[1628]: time="2025-09-13T00:26:21.806918760Z" level=info msg="connecting to shim bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a" address="unix:///run/containerd/s/d7675d9f1cd6ba79880a6811ac89c53596e8822e3623f7742c8a04c21b768fa8" protocol=ttrpc version=3 Sep 13 00:26:21.826457 systemd[1]: Started cri-containerd-bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a.scope - libcontainer container bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a. Sep 13 00:26:21.882538 systemd-networkd[1519]: calie593315ec12: Link UP Sep 13 00:26:21.883119 systemd-networkd[1519]: calie593315ec12: Gained carrier Sep 13 00:26:21.893873 containerd[1628]: time="2025-09-13T00:26:21.893838994Z" level=info msg="StartContainer for \"bd3a3056cbf13f367637daf125cc30fba1bde480b9016c2ff88b079f64d6060a\" returns successfully" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.799 [INFO][4950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0 calico-kube-controllers-8487fb95d8- calico-system 07e6f729-e3ab-45e0-ab76-d34bfb5a97db 792 0 2025-09-13 00:25:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8487fb95d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8487fb95d8-4ffbb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie593315ec12 [] [] }} 
ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.799 [INFO][4950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.834 [INFO][4964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" HandleID="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Workload="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.834 [INFO][4964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" HandleID="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Workload="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8487fb95d8-4ffbb", "timestamp":"2025-09-13 00:26:21.834523953 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.834 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.834 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.834 [INFO][4964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.839 [INFO][4964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.841 [INFO][4964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.844 [INFO][4964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.844 [INFO][4964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.846 [INFO][4964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.846 [INFO][4964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.847 [INFO][4964] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.850 [INFO][4964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.877 [INFO][4964] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.877 [INFO][4964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" host="localhost" Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.877 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:21.901855 containerd[1628]: 2025-09-13 00:26:21.877 [INFO][4964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" HandleID="k8s-pod-network.64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Workload="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.879 [INFO][4950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0", GenerateName:"calico-kube-controllers-8487fb95d8-", Namespace:"calico-system", SelfLink:"", UID:"07e6f729-e3ab-45e0-ab76-d34bfb5a97db", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"8487fb95d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8487fb95d8-4ffbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie593315ec12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.879 [INFO][4950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.879 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie593315ec12 ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.883 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.888 [INFO][4950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0", GenerateName:"calico-kube-controllers-8487fb95d8-", Namespace:"calico-system", SelfLink:"", UID:"07e6f729-e3ab-45e0-ab76-d34bfb5a97db", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8487fb95d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f", Pod:"calico-kube-controllers-8487fb95d8-4ffbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie593315ec12", MAC:"fa:40:69:86:d0:27", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:21.906725 containerd[1628]: 2025-09-13 00:26:21.898 [INFO][4950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" Namespace="calico-system" Pod="calico-kube-controllers-8487fb95d8-4ffbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8487fb95d8--4ffbb-eth0" Sep 13 00:26:21.925152 containerd[1628]: time="2025-09-13T00:26:21.925099781Z" level=info msg="connecting to shim 64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f" address="unix:///run/containerd/s/ec7c3a30f269d625cb7d05c55a3a79e618b620aad8b1972d71cd1b50e8ebacf1" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:26:21.932329 systemd-networkd[1519]: calib50a955d642: Gained IPv6LL Sep 13 00:26:21.942481 systemd[1]: Started cri-containerd-64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f.scope - libcontainer container 64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f. 
Sep 13 00:26:21.950292 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:26:21.974497 containerd[1628]: time="2025-09-13T00:26:21.974466343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8487fb95d8-4ffbb,Uid:07e6f729-e3ab-45e0-ab76-d34bfb5a97db,Namespace:calico-system,Attempt:0,} returns sandbox id \"64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f\"" Sep 13 00:26:22.052442 kubelet[2927]: I0913 00:26:22.052337 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tgvtb" podStartSLOduration=37.051866832 podStartE2EDuration="37.051866832s" podCreationTimestamp="2025-09-13 00:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:26:22.042561815 +0000 UTC m=+42.394023681" watchObservedRunningTime="2025-09-13 00:26:22.051866832 +0000 UTC m=+42.403328692" Sep 13 00:26:22.186500 systemd-networkd[1519]: cali3b53c9cb346: Gained IPv6LL Sep 13 00:26:22.314498 systemd-networkd[1519]: cali0968c03714c: Gained IPv6LL Sep 13 00:26:23.082552 systemd-networkd[1519]: calie593315ec12: Gained IPv6LL Sep 13 00:26:24.632046 containerd[1628]: time="2025-09-13T00:26:24.632007049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:24.641157 containerd[1628]: time="2025-09-13T00:26:24.638164256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:26:24.656396 containerd[1628]: time="2025-09-13T00:26:24.655988436Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:24.670044 containerd[1628]: 
time="2025-09-13T00:26:24.670006983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:24.688694 containerd[1628]: time="2025-09-13T00:26:24.688595891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.905874154s" Sep 13 00:26:24.688694 containerd[1628]: time="2025-09-13T00:26:24.688624082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:26:24.696440 containerd[1628]: time="2025-09-13T00:26:24.689393093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:26:24.704629 containerd[1628]: time="2025-09-13T00:26:24.704401138Z" level=info msg="CreateContainer within sandbox \"2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:26:24.953420 containerd[1628]: time="2025-09-13T00:26:24.952575131Z" level=info msg="Container fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:24.959998 containerd[1628]: time="2025-09-13T00:26:24.959971417Z" level=info msg="CreateContainer within sandbox \"2367995f69c8cd162794e4bfbba4a080df4ae5ab6879bfdbddfbb83b94dec49d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e\"" Sep 13 00:26:24.961917 containerd[1628]: 
time="2025-09-13T00:26:24.961309646Z" level=info msg="StartContainer for \"fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e\"" Sep 13 00:26:24.963543 containerd[1628]: time="2025-09-13T00:26:24.963469526Z" level=info msg="connecting to shim fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e" address="unix:///run/containerd/s/4e5288b34aa94077e4a5293b0bdab79253a6cf9aba863d4bd34a80714ee2f82e" protocol=ttrpc version=3 Sep 13 00:26:25.023464 systemd[1]: Started cri-containerd-fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e.scope - libcontainer container fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e. Sep 13 00:26:25.062328 containerd[1628]: time="2025-09-13T00:26:25.062267378Z" level=info msg="StartContainer for \"fe19da361153543d7f2c7702004844cd2132b1820603ac861306f5495fbbd55e\" returns successfully" Sep 13 00:26:25.157771 kubelet[2927]: I0913 00:26:25.156991 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b9768b6f-mvxpt" podStartSLOduration=27.649912994 podStartE2EDuration="32.15697739s" podCreationTimestamp="2025-09-13 00:25:53 +0000 UTC" firstStartedPulling="2025-09-13 00:26:20.182209995 +0000 UTC m=+40.533671852" lastFinishedPulling="2025-09-13 00:26:24.68927439 +0000 UTC m=+45.040736248" observedRunningTime="2025-09-13 00:26:25.155006796 +0000 UTC m=+45.506468662" watchObservedRunningTime="2025-09-13 00:26:25.15697739 +0000 UTC m=+45.508439256" Sep 13 00:26:25.165658 containerd[1628]: time="2025-09-13T00:26:25.165628798Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:25.165864 containerd[1628]: time="2025-09-13T00:26:25.165829791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:26:25.168377 containerd[1628]: time="2025-09-13T00:26:25.167522104Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 478.116857ms" Sep 13 00:26:25.168377 containerd[1628]: time="2025-09-13T00:26:25.168364999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:26:25.169179 containerd[1628]: time="2025-09-13T00:26:25.169162019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:26:25.175216 containerd[1628]: time="2025-09-13T00:26:25.175117090Z" level=info msg="CreateContainer within sandbox \"f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:26:25.197573 containerd[1628]: time="2025-09-13T00:26:25.197533287Z" level=info msg="Container d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:25.264088 containerd[1628]: time="2025-09-13T00:26:25.264002963Z" level=info msg="CreateContainer within sandbox \"f23d9b2282f9ee12034eb6220d5ab8c14ddbb0844c61049a9148484901e0b461\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3\"" Sep 13 00:26:25.265366 containerd[1628]: time="2025-09-13T00:26:25.264954787Z" level=info msg="StartContainer for \"d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3\"" Sep 13 00:26:25.265532 containerd[1628]: time="2025-09-13T00:26:25.265515353Z" level=info msg="connecting to shim d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3" 
address="unix:///run/containerd/s/b445e5cbd52ff3c84ea4cd67d7592fee985c694023c334c4d5adfc2d175b262e" protocol=ttrpc version=3 Sep 13 00:26:25.287679 systemd[1]: Started cri-containerd-d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3.scope - libcontainer container d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3. Sep 13 00:26:25.343008 containerd[1628]: time="2025-09-13T00:26:25.342983785Z" level=info msg="StartContainer for \"d6e19194ca7dc0e1e11cb50b5f5d6490599dc24e8e8ef2c08b1e7ed0ce48bad3\" returns successfully" Sep 13 00:26:26.147209 kubelet[2927]: I0913 00:26:26.147173 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b9768b6f-brfmv" podStartSLOduration=28.194110208 podStartE2EDuration="33.147163588s" podCreationTimestamp="2025-09-13 00:25:53 +0000 UTC" firstStartedPulling="2025-09-13 00:26:20.215913639 +0000 UTC m=+40.567375499" lastFinishedPulling="2025-09-13 00:26:25.168967026 +0000 UTC m=+45.520428879" observedRunningTime="2025-09-13 00:26:26.14692409 +0000 UTC m=+46.498385956" watchObservedRunningTime="2025-09-13 00:26:26.147163588 +0000 UTC m=+46.498625448" Sep 13 00:26:27.164892 kubelet[2927]: I0913 00:26:27.164866 2927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:26:31.003987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555711398.mount: Deactivated successfully. 
Sep 13 00:26:32.803004 containerd[1628]: time="2025-09-13T00:26:32.802892216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.808679 containerd[1628]: time="2025-09-13T00:26:32.808233481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:26:32.809915 containerd[1628]: time="2025-09-13T00:26:32.809482338Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.811310 containerd[1628]: time="2025-09-13T00:26:32.811287567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.813248 containerd[1628]: time="2025-09-13T00:26:32.811913878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 7.642674069s" Sep 13 00:26:32.813248 containerd[1628]: time="2025-09-13T00:26:32.811929460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:26:32.929240 containerd[1628]: time="2025-09-13T00:26:32.929041817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:26:33.197483 containerd[1628]: time="2025-09-13T00:26:33.197327464Z" level=info msg="CreateContainer within sandbox 
\"d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:26:33.397726 containerd[1628]: time="2025-09-13T00:26:33.396516898Z" level=info msg="Container 422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:33.400757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189879129.mount: Deactivated successfully. Sep 13 00:26:33.405362 containerd[1628]: time="2025-09-13T00:26:33.405037602Z" level=info msg="CreateContainer within sandbox \"d2998b028e2394716b3c875cb07fcf3833e821ad5752b003658cc065d3857c00\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\"" Sep 13 00:26:33.405558 containerd[1628]: time="2025-09-13T00:26:33.405545954Z" level=info msg="StartContainer for \"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\"" Sep 13 00:26:33.409910 containerd[1628]: time="2025-09-13T00:26:33.409876734Z" level=info msg="connecting to shim 422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6" address="unix:///run/containerd/s/d23667d6cad94543093c8b5cb1d85cc419332fbb0a609ef29564d4daa4c0e77e" protocol=ttrpc version=3 Sep 13 00:26:33.636495 systemd[1]: Started cri-containerd-422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6.scope - libcontainer container 422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6. 
Sep 13 00:26:33.756556 containerd[1628]: time="2025-09-13T00:26:33.756487796Z" level=info msg="StartContainer for \"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" returns successfully" Sep 13 00:26:34.912462 kubelet[2927]: I0913 00:26:34.912107 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-td6hn" podStartSLOduration=28.058685447 podStartE2EDuration="39.847307379s" podCreationTimestamp="2025-09-13 00:25:55 +0000 UTC" firstStartedPulling="2025-09-13 00:26:21.070450855 +0000 UTC m=+41.421912715" lastFinishedPulling="2025-09-13 00:26:32.859072785 +0000 UTC m=+53.210534647" observedRunningTime="2025-09-13 00:26:34.803638669 +0000 UTC m=+55.155100542" watchObservedRunningTime="2025-09-13 00:26:34.847307379 +0000 UTC m=+55.198769246" Sep 13 00:26:35.529966 containerd[1628]: time="2025-09-13T00:26:35.529881621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"df2efc9480dd8405b8a883e4061adfc55dfbcc1f946223f4ff5706df7fee548f\" pid:5237 exit_status:1 exited_at:{seconds:1757723195 nanos:431602260}" Sep 13 00:26:35.577841 kubelet[2927]: I0913 00:26:35.569445 2927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:26:35.703964 containerd[1628]: time="2025-09-13T00:26:35.703899688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:35.732447 containerd[1628]: time="2025-09-13T00:26:35.732124615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:26:35.742911 containerd[1628]: time="2025-09-13T00:26:35.742506538Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 13 00:26:35.759850 containerd[1628]: time="2025-09-13T00:26:35.759824653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:35.765118 containerd[1628]: time="2025-09-13T00:26:35.760076620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.831003801s" Sep 13 00:26:35.765118 containerd[1628]: time="2025-09-13T00:26:35.760098502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:26:35.876428 containerd[1628]: time="2025-09-13T00:26:35.876403023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:26:35.889252 containerd[1628]: time="2025-09-13T00:26:35.889227135Z" level=info msg="CreateContainer within sandbox \"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:26:35.978615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619390408.mount: Deactivated successfully. 
Sep 13 00:26:35.989625 containerd[1628]: time="2025-09-13T00:26:35.986134806Z" level=info msg="Container 8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:36.521941 containerd[1628]: time="2025-09-13T00:26:36.521748289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"cc3ced1650218de9bedfd3a7cd598363cf466bceaefd99bc677cebb6bf3b591e\" pid:5259 exit_status:1 exited_at:{seconds:1757723196 nanos:514674357}" Sep 13 00:26:36.556767 containerd[1628]: time="2025-09-13T00:26:36.556722223Z" level=info msg="CreateContainer within sandbox \"f51918d16d5b648c0893d774630f40debf080993d8ded7e5ab145ed8d3406720\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500\"" Sep 13 00:26:36.577480 containerd[1628]: time="2025-09-13T00:26:36.577450067Z" level=info msg="StartContainer for \"8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500\"" Sep 13 00:26:36.578492 containerd[1628]: time="2025-09-13T00:26:36.578471589Z" level=info msg="connecting to shim 8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500" address="unix:///run/containerd/s/d7675d9f1cd6ba79880a6811ac89c53596e8822e3623f7742c8a04c21b768fa8" protocol=ttrpc version=3 Sep 13 00:26:36.602441 systemd[1]: Started cri-containerd-8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500.scope - libcontainer container 8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500. 
Sep 13 00:26:36.680682 containerd[1628]: time="2025-09-13T00:26:36.680613009Z" level=info msg="StartContainer for \"8f04936e6e572ac93637ceef41203fff5f2e7d061a3674c8412658f048185500\" returns successfully" Sep 13 00:26:36.693822 containerd[1628]: time="2025-09-13T00:26:36.693798959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"aab670e0d7742d6aaa3ac20a0e55ca5098a7cee8c3f94e781cf25a357c65331b\" pid:5304 exit_status:1 exited_at:{seconds:1757723196 nanos:690375068}" Sep 13 00:26:38.010654 kubelet[2927]: I0913 00:26:38.004034 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9rjbl" podStartSLOduration=26.234003165 podStartE2EDuration="41.995749221s" podCreationTimestamp="2025-09-13 00:25:56 +0000 UTC" firstStartedPulling="2025-09-13 00:26:20.096983916 +0000 UTC m=+40.448445776" lastFinishedPulling="2025-09-13 00:26:35.858729977 +0000 UTC m=+56.210191832" observedRunningTime="2025-09-13 00:26:37.99020611 +0000 UTC m=+58.341667988" watchObservedRunningTime="2025-09-13 00:26:37.995749221 +0000 UTC m=+58.347211088" Sep 13 00:26:38.152389 kubelet[2927]: I0913 00:26:38.147451 2927 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:26:38.158257 kubelet[2927]: I0913 00:26:38.152398 2927 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:26:39.374703 containerd[1628]: time="2025-09-13T00:26:39.374671640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:39.399513 containerd[1628]: time="2025-09-13T00:26:39.399491334Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:26:39.425367 containerd[1628]: time="2025-09-13T00:26:39.425167990Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:39.454201 containerd[1628]: time="2025-09-13T00:26:39.454123623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.577686745s" Sep 13 00:26:39.454201 containerd[1628]: time="2025-09-13T00:26:39.454151347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:26:39.484339 containerd[1628]: time="2025-09-13T00:26:39.484293563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:40.631082 containerd[1628]: time="2025-09-13T00:26:40.631052610Z" level=info msg="CreateContainer within sandbox \"64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:26:40.652103 containerd[1628]: time="2025-09-13T00:26:40.652053590Z" level=info msg="Container 3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:26:40.669375 containerd[1628]: time="2025-09-13T00:26:40.669229462Z" level=info msg="CreateContainer within sandbox 
\"64a724da2065f4a6609e176c9b5cf59da71a6ccc60de346c00a0d39d091afd9f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\"" Sep 13 00:26:40.670969 containerd[1628]: time="2025-09-13T00:26:40.669822702Z" level=info msg="StartContainer for \"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\"" Sep 13 00:26:40.671433 containerd[1628]: time="2025-09-13T00:26:40.670754620Z" level=info msg="connecting to shim 3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a" address="unix:///run/containerd/s/ec7c3a30f269d625cb7d05c55a3a79e618b620aad8b1972d71cd1b50e8ebacf1" protocol=ttrpc version=3 Sep 13 00:26:40.752451 systemd[1]: Started cri-containerd-3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a.scope - libcontainer container 3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a. Sep 13 00:26:40.822919 containerd[1628]: time="2025-09-13T00:26:40.822864166Z" level=info msg="StartContainer for \"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\" returns successfully" Sep 13 00:26:41.084228 kubelet[2927]: I0913 00:26:41.083690 2927 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8487fb95d8-4ffbb" podStartSLOduration=27.597460086 podStartE2EDuration="45.077081659s" podCreationTimestamp="2025-09-13 00:25:56 +0000 UTC" firstStartedPulling="2025-09-13 00:26:21.975450306 +0000 UTC m=+42.326912166" lastFinishedPulling="2025-09-13 00:26:39.455071879 +0000 UTC m=+59.806533739" observedRunningTime="2025-09-13 00:26:41.07461146 +0000 UTC m=+61.426073322" watchObservedRunningTime="2025-09-13 00:26:41.077081659 +0000 UTC m=+61.428543527" Sep 13 00:26:41.224921 containerd[1628]: time="2025-09-13T00:26:41.224891899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\" 
id:\"15c789591a5fa30b347de104a5505bf44755cff8c9e2588a96b2768c85b74a4d\" pid:5389 exited_at:{seconds:1757723201 nanos:224619260}" Sep 13 00:26:44.507610 containerd[1628]: time="2025-09-13T00:26:44.507568973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" id:\"51964346ba79916db0993584b750ff1d6d5a0952b33123ffc57e9dacd65d141c\" pid:5412 exited_at:{seconds:1757723204 nanos:506478251}" Sep 13 00:26:59.430031 systemd[1]: Started sshd@9-139.178.70.110:22-139.178.89.65:44450.service - OpenSSH per-connection server daemon (139.178.89.65:44450). Sep 13 00:26:59.620650 sshd[5443]: Accepted publickey for core from 139.178.89.65 port 44450 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:26:59.628845 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:59.659006 systemd-logind[1599]: New session 10 of user core. Sep 13 00:26:59.662510 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:27:00.471669 kubelet[2927]: I0913 00:27:00.468073 2927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:27:00.520243 sshd[5445]: Connection closed by 139.178.89.65 port 44450 Sep 13 00:27:00.519753 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:00.538593 systemd[1]: sshd@9-139.178.70.110:22-139.178.89.65:44450.service: Deactivated successfully. Sep 13 00:27:00.541067 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:27:00.543834 systemd-logind[1599]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:27:00.544968 systemd-logind[1599]: Removed session 10. Sep 13 00:27:05.542735 systemd[1]: Started sshd@10-139.178.70.110:22-139.178.89.65:35550.service - OpenSSH per-connection server daemon (139.178.89.65:35550). 
Sep 13 00:27:05.675004 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 35550 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:05.677436 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:05.686076 systemd-logind[1599]: New session 11 of user core. Sep 13 00:27:05.688540 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:27:06.845198 sshd[5467]: Connection closed by 139.178.89.65 port 35550 Sep 13 00:27:06.845096 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:06.866256 systemd[1]: sshd@10-139.178.70.110:22-139.178.89.65:35550.service: Deactivated successfully. Sep 13 00:27:06.869605 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:27:06.874031 systemd-logind[1599]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:27:06.876166 systemd-logind[1599]: Removed session 11. Sep 13 00:27:08.176423 containerd[1628]: time="2025-09-13T00:27:08.176036582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"ace37ff511a529d780bcc5e13fba4ce740a1b2d4679419988ad806fd0ce04e0c\" pid:5492 exited_at:{seconds:1757723228 nanos:154440465}" Sep 13 00:27:10.212689 systemd[1]: Started sshd@11-139.178.70.110:22-34.123.134.194:49420.service - OpenSSH per-connection server daemon (34.123.134.194:49420). 
Sep 13 00:27:10.292965 containerd[1628]: time="2025-09-13T00:27:10.292938845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"fc6539640012ac7d75ae798b3bfefe08b1cd37c42d83a6d893f6d1813a0a1540\" pid:5516 exited_at:{seconds:1757723230 nanos:292686424}" Sep 13 00:27:10.970368 sshd[5526]: Received disconnect from 34.123.134.194 port 49420:11: Bye Bye [preauth] Sep 13 00:27:10.970368 sshd[5526]: Disconnected from authenticating user root 34.123.134.194 port 49420 [preauth] Sep 13 00:27:10.978409 systemd[1]: sshd@11-139.178.70.110:22-34.123.134.194:49420.service: Deactivated successfully. Sep 13 00:27:11.268890 containerd[1628]: time="2025-09-13T00:27:11.268576816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\" id:\"8ede821f06f1b12c41dcfcb202eade20fea33675a2d110ff4e65dd29d01116d6\" pid:5551 exited_at:{seconds:1757723231 nanos:267562688}" Sep 13 00:27:11.745645 update_engine[1600]: I20250913 00:27:11.745517 1600 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:27:11.745645 update_engine[1600]: I20250913 00:27:11.745565 1600 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:27:11.772327 update_engine[1600]: I20250913 00:27:11.772230 1600 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:27:11.787460 update_engine[1600]: I20250913 00:27:11.787340 1600 omaha_request_params.cc:62] Current group set to beta Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.801872 1600 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.801896 1600 update_attempter.cc:643] Scheduling an action processor start. 
Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.801915 1600 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.801952 1600 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.802011 1600 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.802017 1600 omaha_request_action.cc:272] Request: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: Sep 13 00:27:11.803116 update_engine[1600]: I20250913 00:27:11.802021 1600 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:27:11.861551 systemd[1]: Started sshd@12-139.178.70.110:22-139.178.89.65:56120.service - OpenSSH per-connection server daemon (139.178.89.65:56120). Sep 13 00:27:11.870653 locksmithd[1662]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:27:11.906060 update_engine[1600]: I20250913 00:27:11.905762 1600 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:27:11.910577 update_engine[1600]: I20250913 00:27:11.906020 1600 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:27:11.910784 update_engine[1600]: E20250913 00:27:11.910764 1600 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:27:11.910862 update_engine[1600]: I20250913 00:27:11.910852 1600 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:27:12.486476 sshd[5561]: Accepted publickey for core from 139.178.89.65 port 56120 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:12.494545 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:12.498898 systemd-logind[1599]: New session 12 of user core. Sep 13 00:27:12.509448 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:27:15.344065 containerd[1628]: time="2025-09-13T00:27:15.259846448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" id:\"31f1e17df62e3d7288ecb89e4ff9efbd295d931fb6ea91e5b5e2d7b74af6e3b0\" pid:5580 exit_status:1 exited_at:{seconds:1757723235 nanos:246040902}" Sep 13 00:27:15.372564 sshd[5563]: Connection closed by 139.178.89.65 port 56120 Sep 13 00:27:15.399435 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:15.411198 systemd[1]: Started sshd@13-139.178.70.110:22-139.178.89.65:56130.service - OpenSSH per-connection server daemon (139.178.89.65:56130). Sep 13 00:27:15.411975 systemd[1]: sshd@12-139.178.70.110:22-139.178.89.65:56120.service: Deactivated successfully. Sep 13 00:27:15.417278 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:27:15.423664 systemd-logind[1599]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:27:15.430203 systemd-logind[1599]: Removed session 12. 
Sep 13 00:27:15.576366 sshd[5607]: Accepted publickey for core from 139.178.89.65 port 56130 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:15.574004 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:15.584900 systemd-logind[1599]: New session 13 of user core. Sep 13 00:27:15.588746 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:27:16.096763 sshd[5614]: Connection closed by 139.178.89.65 port 56130 Sep 13 00:27:16.096647 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:16.105799 systemd[1]: sshd@13-139.178.70.110:22-139.178.89.65:56130.service: Deactivated successfully. Sep 13 00:27:16.107562 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:27:16.109465 systemd-logind[1599]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:27:16.112633 systemd[1]: Started sshd@14-139.178.70.110:22-139.178.89.65:56136.service - OpenSSH per-connection server daemon (139.178.89.65:56136). Sep 13 00:27:16.116034 systemd-logind[1599]: Removed session 13. Sep 13 00:27:16.209403 sshd[5626]: Accepted publickey for core from 139.178.89.65 port 56136 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:16.211012 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:16.218584 systemd-logind[1599]: New session 14 of user core. Sep 13 00:27:16.222531 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:27:16.501141 sshd[5628]: Connection closed by 139.178.89.65 port 56136 Sep 13 00:27:16.505426 sshd-session[5626]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:16.508110 systemd[1]: sshd@14-139.178.70.110:22-139.178.89.65:56136.service: Deactivated successfully. Sep 13 00:27:16.509858 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 13 00:27:16.511215 systemd-logind[1599]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:27:16.512881 systemd-logind[1599]: Removed session 14. Sep 13 00:27:21.513750 systemd[1]: Started sshd@15-139.178.70.110:22-139.178.89.65:44588.service - OpenSSH per-connection server daemon (139.178.89.65:44588). Sep 13 00:27:21.650614 sshd[5642]: Accepted publickey for core from 139.178.89.65 port 44588 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:21.651621 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:21.657034 systemd-logind[1599]: New session 15 of user core. Sep 13 00:27:21.662447 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:27:22.524542 update_engine[1600]: I20250913 00:27:22.462036 1600 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:27:22.544830 update_engine[1600]: I20250913 00:27:22.537303 1600 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:27:22.544830 update_engine[1600]: I20250913 00:27:22.537529 1600 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:27:22.544830 update_engine[1600]: E20250913 00:27:22.544119 1600 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:27:22.544830 update_engine[1600]: I20250913 00:27:22.544180 1600 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:27:22.934103 sshd[5646]: Connection closed by 139.178.89.65 port 44588 Sep 13 00:27:22.934595 sshd-session[5642]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:22.942666 systemd[1]: sshd@15-139.178.70.110:22-139.178.89.65:44588.service: Deactivated successfully. Sep 13 00:27:22.944266 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:27:22.945703 systemd-logind[1599]: Session 15 logged out. Waiting for processes to exit. 
Sep 13 00:27:22.947477 systemd[1]: Started sshd@16-139.178.70.110:22-139.178.89.65:44590.service - OpenSSH per-connection server daemon (139.178.89.65:44590). Sep 13 00:27:22.948701 systemd-logind[1599]: Removed session 15. Sep 13 00:27:22.993388 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 44590 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:22.994576 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:22.998388 systemd-logind[1599]: New session 16 of user core. Sep 13 00:27:23.002459 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:27:23.520934 sshd[5661]: Connection closed by 139.178.89.65 port 44590 Sep 13 00:27:23.522877 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:23.531774 systemd[1]: sshd@16-139.178.70.110:22-139.178.89.65:44590.service: Deactivated successfully. Sep 13 00:27:23.534502 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:27:23.535415 systemd-logind[1599]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:27:23.544501 systemd[1]: Started sshd@17-139.178.70.110:22-139.178.89.65:44594.service - OpenSSH per-connection server daemon (139.178.89.65:44594). Sep 13 00:27:23.549370 systemd-logind[1599]: Removed session 16. Sep 13 00:27:23.627841 sshd[5671]: Accepted publickey for core from 139.178.89.65 port 44594 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:23.628504 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:23.632550 systemd-logind[1599]: New session 17 of user core. Sep 13 00:27:23.635448 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 13 00:27:24.450111 sshd[5673]: Connection closed by 139.178.89.65 port 44594 Sep 13 00:27:24.463107 sshd-session[5671]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:24.466503 systemd[1]: Started sshd@18-139.178.70.110:22-139.178.89.65:44596.service - OpenSSH per-connection server daemon (139.178.89.65:44596). Sep 13 00:27:24.472256 systemd[1]: sshd@17-139.178.70.110:22-139.178.89.65:44594.service: Deactivated successfully. Sep 13 00:27:24.473470 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:27:24.477015 systemd-logind[1599]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:27:24.479207 systemd-logind[1599]: Removed session 17. Sep 13 00:27:24.575006 sshd[5683]: Accepted publickey for core from 139.178.89.65 port 44596 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:24.575968 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:24.583734 systemd-logind[1599]: New session 18 of user core. Sep 13 00:27:24.588644 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:27:25.423670 sshd[5692]: Connection closed by 139.178.89.65 port 44596 Sep 13 00:27:25.423000 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:25.433210 systemd[1]: sshd@18-139.178.70.110:22-139.178.89.65:44596.service: Deactivated successfully. Sep 13 00:27:25.435103 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:27:25.437387 systemd-logind[1599]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:27:25.441473 systemd[1]: Started sshd@19-139.178.70.110:22-139.178.89.65:44604.service - OpenSSH per-connection server daemon (139.178.89.65:44604). Sep 13 00:27:25.444317 systemd-logind[1599]: Removed session 18. 
Sep 13 00:27:25.488858 sshd[5703]: Accepted publickey for core from 139.178.89.65 port 44604 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:25.489645 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:25.493492 systemd-logind[1599]: New session 19 of user core. Sep 13 00:27:25.499467 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:27:25.626046 sshd[5705]: Connection closed by 139.178.89.65 port 44604 Sep 13 00:27:25.626230 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:25.628698 systemd[1]: sshd@19-139.178.70.110:22-139.178.89.65:44604.service: Deactivated successfully. Sep 13 00:27:25.629851 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:27:25.632043 systemd-logind[1599]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:27:25.633654 systemd-logind[1599]: Removed session 19. Sep 13 00:27:30.633523 systemd[1]: Started sshd@20-139.178.70.110:22-139.178.89.65:39336.service - OpenSSH per-connection server daemon (139.178.89.65:39336). Sep 13 00:27:30.798641 sshd[5718]: Accepted publickey for core from 139.178.89.65 port 39336 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:30.801141 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:30.805233 systemd-logind[1599]: New session 20 of user core. Sep 13 00:27:30.813437 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:27:32.340528 sshd[5720]: Connection closed by 139.178.89.65 port 39336 Sep 13 00:27:32.343814 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:32.348437 systemd[1]: sshd@20-139.178.70.110:22-139.178.89.65:39336.service: Deactivated successfully. Sep 13 00:27:32.351271 systemd-logind[1599]: Session 20 logged out. Waiting for processes to exit. 
Sep 13 00:27:32.351680 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:27:32.353399 systemd-logind[1599]: Removed session 20. Sep 13 00:27:32.437766 update_engine[1600]: I20250913 00:27:32.437434 1600 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:27:32.485123 update_engine[1600]: I20250913 00:27:32.453812 1600 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:27:32.485123 update_engine[1600]: I20250913 00:27:32.454097 1600 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:27:32.485123 update_engine[1600]: E20250913 00:27:32.457708 1600 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:27:32.485123 update_engine[1600]: I20250913 00:27:32.457757 1600 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:27:37.391429 systemd[1]: Started sshd@21-139.178.70.110:22-139.178.89.65:39352.service - OpenSSH per-connection server daemon (139.178.89.65:39352). Sep 13 00:27:37.596384 sshd[5756]: Accepted publickey for core from 139.178.89.65 port 39352 ssh2: RSA SHA256:7j1m2IeDi8iUn41dLAEnUdP9v/F2nQcYG+53Qqm2Rt8 Sep 13 00:27:37.598225 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:37.603776 systemd-logind[1599]: New session 21 of user core. Sep 13 00:27:37.609579 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 13 00:27:40.143000 containerd[1628]: time="2025-09-13T00:27:40.117984626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422a5323a0686c3975fe24a38304028528b935b8204aef5d08e5a00107554bc6\" id:\"17aec7a81a0f34cce31545991ee8549d6ee2c486ffa862fea6fbcd90e3e747b9\" pid:5749 exited_at:{seconds:1757723259 nanos:907338490}" Sep 13 00:27:41.470693 containerd[1628]: time="2025-09-13T00:27:41.470667344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\" id:\"124edefbd8bf8fa58976e8c1bec87378af1915f0100b0a8ddf75d6c5305d2fd0\" pid:5787 exited_at:{seconds:1757723261 nanos:470461795}" Sep 13 00:27:42.440573 update_engine[1600]: I20250913 00:27:42.440514 1600 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:27:42.508938 update_engine[1600]: I20250913 00:27:42.495436 1600 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:27:42.508938 update_engine[1600]: I20250913 00:27:42.495659 1600 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:27:42.508938 update_engine[1600]: E20250913 00:27:42.501262 1600 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:27:42.508938 update_engine[1600]: I20250913 00:27:42.501298 1600 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:27:42.513694 update_engine[1600]: I20250913 00:27:42.513659 1600 omaha_request_action.cc:617] Omaha request response: Sep 13 00:27:42.557411 update_engine[1600]: E20250913 00:27:42.557369 1600 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 13 00:27:43.125190 containerd[1628]: time="2025-09-13T00:27:43.125155857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3173d62298f596fbf01bd6fc6723bc7518cc759aebfb804ca3c7ddf7a34df79a\" id:\"e9fa03cb25da690c930f3c6a94d4228445deea05de445d6b730c3d64c24beb28\" pid:5810 exited_at:{seconds:1757723263 nanos:122407401}" Sep 13 00:27:43.175366 update_engine[1600]: I20250913 00:27:43.174151 1600 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:27:43.175366 update_engine[1600]: I20250913 00:27:43.174182 1600 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:27:43.175366 update_engine[1600]: I20250913 00:27:43.174187 1600 update_attempter.cc:306] Processing Done. Sep 13 00:27:43.188785 update_engine[1600]: E20250913 00:27:43.188735 1600 update_attempter.cc:619] Update failed. Sep 13 00:27:43.188785 update_engine[1600]: I20250913 00:27:43.188772 1600 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:27:43.188785 update_engine[1600]: I20250913 00:27:43.188781 1600 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:27:43.188785 update_engine[1600]: I20250913 00:27:43.188784 1600 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.188868 1600 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.192040 1600 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.192060 1600 omaha_request_action.cc:272] Request: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.192065 1600 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.192806 1600 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.193007 1600 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:27:43.216971 update_engine[1600]: E20250913 00:27:43.197414 1600 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197451 1600 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197456 1600 omaha_request_action.cc:617] Omaha request response: Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197461 1600 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197464 1600 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197467 1600 update_attempter.cc:306] Processing Done. Sep 13 00:27:43.216971 update_engine[1600]: I20250913 00:27:43.197470 1600 update_attempter.cc:310] Error event sent. 
Sep 13 00:27:43.306735 update_engine[1600]: I20250913 00:27:43.197481 1600 update_check_scheduler.cc:74] Next update check in 44m33s Sep 13 00:27:43.306768 locksmithd[1662]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:27:43.306768 locksmithd[1662]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:27:46.129558 containerd[1628]: time="2025-09-13T00:27:46.129467436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de6f0d8fd132e5be22f5d90a9b3534d6fa9ee6fc02025f6473734576b285c18e\" id:\"51c16e659d0f97a8ab9d452b1d94b81634e4fb1180cf962b3670d60de29c281e\" pid:5831 exited_at:{seconds:1757723266 nanos:128750899}" Sep 13 00:27:46.620458 sshd[5763]: Connection closed by 139.178.89.65 port 39352 Sep 13 00:27:46.639387 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:46.665334 systemd-logind[1599]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:27:46.665867 systemd[1]: sshd@21-139.178.70.110:22-139.178.89.65:39352.service: Deactivated successfully. Sep 13 00:27:46.667141 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:27:46.670674 systemd-logind[1599]: Removed session 21.