Sep 12 05:56:26.701336 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 04:02:32 -00 2025 Sep 12 05:56:26.701353 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:56:26.701359 kernel: Disabled fast string operations Sep 12 05:56:26.701363 kernel: BIOS-provided physical RAM map: Sep 12 05:56:26.701367 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Sep 12 05:56:26.701371 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Sep 12 05:56:26.701377 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Sep 12 05:56:26.701388 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Sep 12 05:56:26.701392 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Sep 12 05:56:26.701397 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Sep 12 05:56:26.701401 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Sep 12 05:56:26.701405 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Sep 12 05:56:26.701409 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Sep 12 05:56:26.701414 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 12 05:56:26.701423 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Sep 12 05:56:26.701428 kernel: NX (Execute Disable) protection: active Sep 12 05:56:26.701433 kernel: APIC: Static calls initialized Sep 12 05:56:26.701438 kernel: SMBIOS 2.7 present. 
Sep 12 05:56:26.701443 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 12 05:56:26.701469 kernel: DMI: Memory slots populated: 1/128 Sep 12 05:56:26.701476 kernel: vmware: hypercall mode: 0x00 Sep 12 05:56:26.701481 kernel: Hypervisor detected: VMware Sep 12 05:56:26.701486 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 12 05:56:26.701491 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 12 05:56:26.701496 kernel: vmware: using clock offset of 3505908112 ns Sep 12 05:56:26.701504 kernel: tsc: Detected 3408.000 MHz processor Sep 12 05:56:26.701524 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 05:56:26.701530 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 05:56:26.701535 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 12 05:56:26.701539 kernel: total RAM covered: 3072M Sep 12 05:56:26.701548 kernel: Found optimal setting for mtrr clean up Sep 12 05:56:26.701554 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 12 05:56:26.701559 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 12 05:56:26.701564 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 05:56:26.701569 kernel: Using GB pages for direct mapping Sep 12 05:56:26.701574 kernel: ACPI: Early table checksum verification disabled Sep 12 05:56:26.701582 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 12 05:56:26.701587 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 12 05:56:26.701592 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Sep 12 05:56:26.701598 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 12 05:56:26.701605 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 12 05:56:26.701615 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 12 
05:56:26.701621 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 12 05:56:26.701626 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Sep 12 05:56:26.701632 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 12 05:56:26.701638 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 12 05:56:26.701643 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 12 05:56:26.701648 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 12 05:56:26.701653 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 12 05:56:26.701658 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 12 05:56:26.701663 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 12 05:56:26.701668 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 12 05:56:26.701673 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 12 05:56:26.701678 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 12 05:56:26.701684 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 12 05:56:26.701689 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 12 05:56:26.701694 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 12 05:56:26.701699 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 12 05:56:26.701704 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 05:56:26.701709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 12 05:56:26.701714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 12 05:56:26.701719 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 
0x00001000-0x7fffffff] Sep 12 05:56:26.701724 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Sep 12 05:56:26.701730 kernel: Zone ranges: Sep 12 05:56:26.701736 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 05:56:26.701741 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 12 05:56:26.701746 kernel: Normal empty Sep 12 05:56:26.701751 kernel: Device empty Sep 12 05:56:26.701756 kernel: Movable zone start for each node Sep 12 05:56:26.701761 kernel: Early memory node ranges Sep 12 05:56:26.701765 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 12 05:56:26.701770 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 12 05:56:26.701775 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 12 05:56:26.701781 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 12 05:56:26.701786 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 05:56:26.701791 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 12 05:56:26.701797 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 12 05:56:26.701801 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 12 05:56:26.701807 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 12 05:56:26.701811 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 12 05:56:26.701816 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 12 05:56:26.701821 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 12 05:56:26.701827 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 12 05:56:26.701832 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 12 05:56:26.701837 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 12 05:56:26.701842 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 12 05:56:26.701848 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 12 05:56:26.701853 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge 
lint[0x1]) Sep 12 05:56:26.701858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 12 05:56:26.701863 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 12 05:56:26.701868 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 12 05:56:26.701873 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 12 05:56:26.701879 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 12 05:56:26.701884 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 12 05:56:26.701889 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 12 05:56:26.701893 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 12 05:56:26.701898 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 12 05:56:26.701903 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 12 05:56:26.701908 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 12 05:56:26.701913 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 12 05:56:26.701918 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 12 05:56:26.701923 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 12 05:56:26.701928 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 12 05:56:26.701933 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 12 05:56:26.701938 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 12 05:56:26.701943 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 12 05:56:26.701948 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 12 05:56:26.701953 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 12 05:56:26.701958 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 12 05:56:26.701963 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 12 05:56:26.701968 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 12 05:56:26.701972 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge 
lint[0x1]) Sep 12 05:56:26.701978 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 12 05:56:26.701983 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 12 05:56:26.701988 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 12 05:56:26.701993 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 12 05:56:26.701998 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 12 05:56:26.702003 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 12 05:56:26.702012 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 12 05:56:26.702018 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 12 05:56:26.702023 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 12 05:56:26.702028 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 12 05:56:26.702047 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 12 05:56:26.702053 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 12 05:56:26.702058 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 12 05:56:26.702063 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 12 05:56:26.702069 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 12 05:56:26.702074 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 12 05:56:26.702079 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Sep 12 05:56:26.702084 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 12 05:56:26.702091 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 12 05:56:26.702096 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 12 05:56:26.702101 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 12 05:56:26.702106 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 12 05:56:26.702112 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 12 05:56:26.702117 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge 
lint[0x1]) Sep 12 05:56:26.702122 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 12 05:56:26.702127 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 12 05:56:26.702132 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 12 05:56:26.702137 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 12 05:56:26.702144 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 12 05:56:26.702149 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 12 05:56:26.702154 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 12 05:56:26.702159 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 12 05:56:26.702164 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 12 05:56:26.702170 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 12 05:56:26.702175 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 12 05:56:26.702180 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 12 05:56:26.702478 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 12 05:56:26.702489 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 12 05:56:26.702495 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 12 05:56:26.702500 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 12 05:56:26.702505 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 12 05:56:26.702510 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 12 05:56:26.702515 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 12 05:56:26.702521 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 12 05:56:26.702526 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 12 05:56:26.702531 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 12 05:56:26.702536 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 12 05:56:26.702543 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge 
lint[0x1]) Sep 12 05:56:26.702548 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 12 05:56:26.702553 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 12 05:56:26.702558 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 12 05:56:26.702564 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 12 05:56:26.702569 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 12 05:56:26.702574 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 12 05:56:26.702580 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 12 05:56:26.702585 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 12 05:56:26.702590 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 12 05:56:26.702596 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 12 05:56:26.702601 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 12 05:56:26.702607 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 12 05:56:26.702612 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 12 05:56:26.702617 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 12 05:56:26.702622 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 12 05:56:26.702627 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 12 05:56:26.702633 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 12 05:56:26.702638 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 12 05:56:26.702643 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 12 05:56:26.702650 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 12 05:56:26.702655 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 12 05:56:26.702660 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 12 05:56:26.702665 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 12 05:56:26.702670 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge 
lint[0x1]) Sep 12 05:56:26.702675 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 12 05:56:26.702681 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 12 05:56:26.702686 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 12 05:56:26.702691 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 12 05:56:26.702697 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 12 05:56:26.702703 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 12 05:56:26.702708 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 12 05:56:26.702713 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 12 05:56:26.702718 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 12 05:56:26.702723 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 12 05:56:26.702729 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 12 05:56:26.702734 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 12 05:56:26.702739 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 12 05:56:26.702744 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 12 05:56:26.702750 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 12 05:56:26.702756 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 12 05:56:26.702761 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 12 05:56:26.702766 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 12 05:56:26.702772 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 12 05:56:26.702777 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 12 05:56:26.702782 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 12 05:56:26.702787 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 12 05:56:26.702792 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 12 05:56:26.702798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 high edge) Sep 12 05:56:26.702804 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 05:56:26.702810 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 12 05:56:26.702815 kernel: TSC deadline timer available Sep 12 05:56:26.702820 kernel: CPU topo: Max. logical packages: 128 Sep 12 05:56:26.702826 kernel: CPU topo: Max. logical dies: 128 Sep 12 05:56:26.702831 kernel: CPU topo: Max. dies per package: 1 Sep 12 05:56:26.702836 kernel: CPU topo: Max. threads per core: 1 Sep 12 05:56:26.702841 kernel: CPU topo: Num. cores per package: 1 Sep 12 05:56:26.702846 kernel: CPU topo: Num. threads per package: 1 Sep 12 05:56:26.702852 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Sep 12 05:56:26.702858 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 12 05:56:26.702863 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 12 05:56:26.702887 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 05:56:26.702893 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 12 05:56:26.702898 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 12 05:56:26.702904 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 12 05:56:26.702909 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 12 05:56:26.702914 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 12 05:56:26.702920 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 12 05:56:26.703196 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 12 05:56:26.703203 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 12 05:56:26.703208 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 12 05:56:26.703213 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 12 05:56:26.703218 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 12 05:56:26.703224 kernel: pcpu-alloc: [0] 
064 065 066 067 068 069 070 071 Sep 12 05:56:26.703229 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 12 05:56:26.703234 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 12 05:56:26.703240 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 12 05:56:26.703247 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 12 05:56:26.703252 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 12 05:56:26.703257 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 12 05:56:26.703263 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 12 05:56:26.703269 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:56:26.703275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 05:56:26.703281 kernel: random: crng init done Sep 12 05:56:26.703286 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 12 05:56:26.703293 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 12 05:56:26.703298 kernel: printk: log_buf_len min size: 262144 bytes Sep 12 05:56:26.703304 kernel: printk: log_buf_len: 1048576 bytes Sep 12 05:56:26.703309 kernel: printk: early log buf free: 245592(93%) Sep 12 05:56:26.703315 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 05:56:26.703320 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 05:56:26.703326 kernel: Fallback order for Node 0: 0 Sep 12 05:56:26.703331 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Sep 12 05:56:26.703337 kernel: Policy zone: DMA32 Sep 12 05:56:26.703343 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 05:56:26.703349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 12 05:56:26.703354 kernel: ftrace: allocating 40123 entries in 157 pages Sep 12 05:56:26.703360 kernel: ftrace: allocated 157 pages with 5 groups Sep 12 05:56:26.703365 kernel: Dynamic Preempt: voluntary Sep 12 05:56:26.703371 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 05:56:26.703376 kernel: rcu: RCU event tracing is enabled. Sep 12 05:56:26.703382 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 12 05:56:26.703387 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 05:56:26.703394 kernel: Rude variant of Tasks RCU enabled. Sep 12 05:56:26.703399 kernel: Tracing variant of Tasks RCU enabled. Sep 12 05:56:26.703405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 05:56:26.703410 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 12 05:56:26.703416 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 05:56:26.703421 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 05:56:26.703427 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 12 05:56:26.703433 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 12 05:56:26.703438 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Sep 12 05:56:26.703444 kernel: Console: colour VGA+ 80x25 Sep 12 05:56:26.703450 kernel: printk: legacy console [tty0] enabled Sep 12 05:56:26.703455 kernel: printk: legacy console [ttyS0] enabled Sep 12 05:56:26.703461 kernel: ACPI: Core revision 20240827 Sep 12 05:56:26.703466 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 12 05:56:26.703472 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 05:56:26.703493 kernel: x2apic enabled Sep 12 05:56:26.703498 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 05:56:26.703504 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 05:56:26.703510 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 12 05:56:26.703516 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Sep 12 05:56:26.703521 kernel: Disabled fast string operations Sep 12 05:56:26.703526 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 12 05:56:26.703532 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 12 05:56:26.703537 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 05:56:26.703543 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 12 05:56:26.703548 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 12 05:56:26.703553 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 12 05:56:26.703559 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 12 05:56:26.703565 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 05:56:26.703570 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 05:56:26.703576 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 05:56:26.703581 kernel: SRBDS: Unknown: Dependent on hypervisor 
status Sep 12 05:56:26.703586 kernel: GDS: Unknown: Dependent on hypervisor status Sep 12 05:56:26.703592 kernel: active return thunk: its_return_thunk Sep 12 05:56:26.703597 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 05:56:26.703602 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 05:56:26.703608 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 05:56:26.703614 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 05:56:26.703619 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 05:56:26.703625 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 05:56:26.703630 kernel: Freeing SMP alternatives memory: 32K Sep 12 05:56:26.703635 kernel: pid_max: default: 131072 minimum: 1024 Sep 12 05:56:26.703641 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 05:56:26.703646 kernel: landlock: Up and running. Sep 12 05:56:26.703651 kernel: SELinux: Initializing. Sep 12 05:56:26.703658 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 05:56:26.703663 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 05:56:26.703669 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 12 05:56:26.703674 kernel: Performance Events: Skylake events, core PMU driver. 
Sep 12 05:56:26.703679 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 12 05:56:26.703685 kernel: core: CPUID marked event: 'instructions' unavailable Sep 12 05:56:26.703690 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 12 05:56:26.703696 kernel: core: CPUID marked event: 'cache references' unavailable Sep 12 05:56:26.703701 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 12 05:56:26.703707 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 12 05:56:26.703712 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 12 05:56:26.703717 kernel: ... version: 1 Sep 12 05:56:26.703723 kernel: ... bit width: 48 Sep 12 05:56:26.703728 kernel: ... generic registers: 4 Sep 12 05:56:26.703733 kernel: ... value mask: 0000ffffffffffff Sep 12 05:56:26.703739 kernel: ... max period: 000000007fffffff Sep 12 05:56:26.703744 kernel: ... fixed-purpose events: 0 Sep 12 05:56:26.703749 kernel: ... event mask: 000000000000000f Sep 12 05:56:26.703756 kernel: signal: max sigframe size: 1776 Sep 12 05:56:26.703761 kernel: rcu: Hierarchical SRCU implementation. Sep 12 05:56:26.703766 kernel: rcu: Max phase no-delay instances is 400. Sep 12 05:56:26.703772 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Sep 12 05:56:26.703777 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 05:56:26.703782 kernel: smp: Bringing up secondary CPUs ... Sep 12 05:56:26.703788 kernel: smpboot: x86: Booting SMP configuration: Sep 12 05:56:26.703793 kernel: .... 
node #0, CPUs: #1
Sep 12 05:56:26.703798 kernel: Disabled fast string operations
Sep 12 05:56:26.703804 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 05:56:26.703810 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 12 05:56:26.703816 kernel: Memory: 1924232K/2096628K available (14336K kernel code, 2432K rwdata, 9988K rodata, 54092K init, 2872K bss, 161020K reserved, 0K cma-reserved)
Sep 12 05:56:26.703821 kernel: devtmpfs: initialized
Sep 12 05:56:26.703827 kernel: x86/mm: Memory block size: 128MB
Sep 12 05:56:26.703832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 12 05:56:26.703837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 05:56:26.703843 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 12 05:56:26.703848 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 05:56:26.703854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 05:56:26.703860 kernel: audit: initializing netlink subsys (disabled)
Sep 12 05:56:26.703865 kernel: audit: type=2000 audit(1757656583.268:1): state=initialized audit_enabled=0 res=1
Sep 12 05:56:26.703870 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 05:56:26.703876 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 05:56:26.703881 kernel: cpuidle: using governor menu
Sep 12 05:56:26.703886 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 12 05:56:26.703892 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 05:56:26.703897 kernel: dca service started, version 1.12.1
Sep 12 05:56:26.703909 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Sep 12 05:56:26.703915 kernel: PCI: Using configuration type 1 for base access
Sep 12 05:56:26.703921 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 05:56:26.703927 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 05:56:26.703932 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 05:56:26.703938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 05:56:26.703944 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 05:56:26.703949 kernel: ACPI: Added _OSI(Module Device)
Sep 12 05:56:26.703955 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 05:56:26.703961 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 05:56:26.703967 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 05:56:26.703973 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 12 05:56:26.703979 kernel: ACPI: Interpreter enabled
Sep 12 05:56:26.703984 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 12 05:56:26.703990 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 05:56:26.703996 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 05:56:26.704001 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 05:56:26.704007 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 12 05:56:26.704014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 12 05:56:26.704092 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 05:56:26.704144 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 12 05:56:26.704201 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 12 05:56:26.704210 kernel: PCI host bridge to bus 0000:00
Sep 12 05:56:26.704263 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 05:56:26.704311 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 12 05:56:26.704353 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 05:56:26.704395 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 05:56:26.704439 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 12 05:56:26.704510 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 12 05:56:26.706323 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Sep 12 05:56:26.706391 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Sep 12 05:56:26.706458 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 05:56:26.706518 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 12 05:56:26.706574 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Sep 12 05:56:26.706625 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Sep 12 05:56:26.706674 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 12 05:56:26.706723 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 12 05:56:26.706772 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 12 05:56:26.706819 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 12 05:56:26.706873 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 12 05:56:26.706923 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 12 05:56:26.706973 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 12 05:56:26.707026 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Sep 12 05:56:26.707075 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Sep 12 05:56:26.707123 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Sep 12 05:56:26.708961 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Sep 12 05:56:26.709031 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Sep 12 05:56:26.709088 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Sep 12 05:56:26.709139 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Sep 12 05:56:26.709228 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Sep 12 05:56:26.709284 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 05:56:26.709350 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Sep 12 05:56:26.709400 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 12 05:56:26.709450 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 12 05:56:26.709502 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 12 05:56:26.709549 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 12 05:56:26.709602 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.709652 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 12 05:56:26.709702 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 12 05:56:26.709751 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 12 05:56:26.709801 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.709855 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.709907 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 12 05:56:26.709957 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 12 05:56:26.710006 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 12 05:56:26.710055 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 12 05:56:26.710104 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.710158 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.711253 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 12 05:56:26.711317 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 12 05:56:26.711371 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Sep 12 05:56:26.711424 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Sep 12 05:56:26.711523 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.711579 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.711633 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 12 05:56:26.711682 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Sep 12 05:56:26.711730 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Sep 12 05:56:26.711779 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.711831 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.711881 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 12 05:56:26.711929 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Sep 12 05:56:26.711976 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Sep 12 05:56:26.712027 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.712080 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.712129 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 12 05:56:26.712177 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Sep 12 05:56:26.712245 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Sep 12 05:56:26.712296 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.712350 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.712402 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 12 05:56:26.712450 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Sep 12 05:56:26.712498 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Sep 12 05:56:26.712546 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.712597 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.712646 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 12 05:56:26.712694 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Sep 12 05:56:26.712744 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Sep 12 05:56:26.712792 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.712846 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.712897 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 12 05:56:26.712944 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Sep 12 05:56:26.712993 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Sep 12 05:56:26.713040 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.713092 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.713143 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 12 05:56:26.715209 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Sep 12 05:56:26.715274 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Sep 12 05:56:26.715329 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Sep 12 05:56:26.715381 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.715437 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.715492 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 12 05:56:26.715542 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Sep 12 05:56:26.715592 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Sep 12 05:56:26.715641 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Sep 12 05:56:26.715691 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.715745 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.715795 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 12 05:56:26.715848 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Sep 12 05:56:26.715897 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Sep 12 05:56:26.715945 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.716001 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.716052 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 12 05:56:26.716101 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Sep 12 05:56:26.716150 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Sep 12 05:56:26.716211 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.716272 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.716322 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 12 05:56:26.716370 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Sep 12 05:56:26.716419 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Sep 12 05:56:26.716467 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.716521 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.716571 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 12 05:56:26.716622 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Sep 12 05:56:26.716671 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Sep 12 05:56:26.716720 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.716773 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.716823 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 12 05:56:26.716872 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Sep 12 05:56:26.716920 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Sep 12 05:56:26.716971 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.717027 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.717077 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 12 05:56:26.717125 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Sep 12 05:56:26.717173 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Sep 12 05:56:26.718334 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Sep 12 05:56:26.718391 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.718451 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.718503 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 12 05:56:26.718553 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Sep 12 05:56:26.718603 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Sep 12 05:56:26.718656 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Sep 12 05:56:26.718705 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.718758 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.718808 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 12 05:56:26.718856 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Sep 12 05:56:26.718905 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Sep 12 05:56:26.718955 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Sep 12 05:56:26.719006 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.719062 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.719111 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 12 05:56:26.719160 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Sep 12 05:56:26.719226 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Sep 12 05:56:26.719276 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.719332 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.719386 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 12 05:56:26.719435 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Sep 12 05:56:26.719485 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Sep 12 05:56:26.719534 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.719587 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.719636 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 12 05:56:26.719696 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Sep 12 05:56:26.719749 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Sep 12 05:56:26.719799 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.719852 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.719902 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 12 05:56:26.719951 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Sep 12 05:56:26.719999 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Sep 12 05:56:26.720048 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.720104 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.720155 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 12 05:56:26.720236 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Sep 12 05:56:26.720288 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Sep 12 05:56:26.720337 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.720393 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.720447 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 12 05:56:26.720530 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Sep 12 05:56:26.722257 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Sep 12 05:56:26.722312 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Sep 12 05:56:26.722363 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.722419 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.722470 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 12 05:56:26.722520 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Sep 12 05:56:26.722569 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Sep 12 05:56:26.722621 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Sep 12 05:56:26.722682 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.722739 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.722797 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 12 05:56:26.722855 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Sep 12 05:56:26.722905 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Sep 12 05:56:26.722954 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.723011 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.723061 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 12 05:56:26.723110 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Sep 12 05:56:26.723159 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Sep 12 05:56:26.723216 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.723270 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.723320 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 12 05:56:26.723372 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Sep 12 05:56:26.723420 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Sep 12 05:56:26.723491 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.723563 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.723612 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 12 05:56:26.723662 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Sep 12 05:56:26.723710 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Sep 12 05:56:26.723762 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.723816 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.723866 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 12 05:56:26.723915 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Sep 12 05:56:26.723963 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Sep 12 05:56:26.724012 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.724067 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 12 05:56:26.724119 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 12 05:56:26.724167 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Sep 12 05:56:26.725761 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Sep 12 05:56:26.725817 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.725877 kernel: pci_bus 0000:01: extended config space not accessible
Sep 12 05:56:26.725930 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 05:56:26.725982 kernel: pci_bus 0000:02: extended config space not accessible
Sep 12 05:56:26.725992 kernel: acpiphp: Slot [32] registered
Sep 12 05:56:26.726001 kernel: acpiphp: Slot [33] registered
Sep 12 05:56:26.726006 kernel: acpiphp: Slot [34] registered
Sep 12 05:56:26.726012 kernel: acpiphp: Slot [35] registered
Sep 12 05:56:26.726018 kernel: acpiphp: Slot [36] registered
Sep 12 05:56:26.726023 kernel: acpiphp: Slot [37] registered
Sep 12 05:56:26.726029 kernel: acpiphp: Slot [38] registered
Sep 12 05:56:26.726035 kernel: acpiphp: Slot [39] registered
Sep 12 05:56:26.726041 kernel: acpiphp: Slot [40] registered
Sep 12 05:56:26.726046 kernel: acpiphp: Slot [41] registered
Sep 12 05:56:26.726053 kernel: acpiphp: Slot [42] registered
Sep 12 05:56:26.726059 kernel: acpiphp: Slot [43] registered
Sep 12 05:56:26.726065 kernel: acpiphp: Slot [44] registered
Sep 12 05:56:26.726070 kernel: acpiphp: Slot [45] registered
Sep 12 05:56:26.726076 kernel: acpiphp: Slot [46] registered
Sep 12 05:56:26.726082 kernel: acpiphp: Slot [47] registered
Sep 12 05:56:26.726088 kernel: acpiphp: Slot [48] registered
Sep 12 05:56:26.726093 kernel: acpiphp: Slot [49] registered
Sep 12 05:56:26.726099 kernel: acpiphp: Slot [50] registered
Sep 12 05:56:26.726105 kernel: acpiphp: Slot [51] registered
Sep 12 05:56:26.726111 kernel: acpiphp: Slot [52] registered
Sep 12 05:56:26.726117 kernel: acpiphp: Slot [53] registered
Sep 12 05:56:26.726123 kernel: acpiphp: Slot [54] registered
Sep 12 05:56:26.726129 kernel: acpiphp: Slot [55] registered
Sep 12 05:56:26.726134 kernel: acpiphp: Slot [56] registered
Sep 12 05:56:26.726140 kernel: acpiphp: Slot [57] registered
Sep 12 05:56:26.726146 kernel: acpiphp: Slot [58] registered
Sep 12 05:56:26.726151 kernel: acpiphp: Slot [59] registered
Sep 12 05:56:26.726157 kernel: acpiphp: Slot [60] registered
Sep 12 05:56:26.726163 kernel: acpiphp: Slot [61] registered
Sep 12 05:56:26.726169 kernel: acpiphp: Slot [62] registered
Sep 12 05:56:26.726175 kernel: acpiphp: Slot [63] registered
Sep 12 05:56:26.728247 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 12 05:56:26.728312 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Sep 12 05:56:26.728361 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Sep 12 05:56:26.728409 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Sep 12 05:56:26.728457 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Sep 12 05:56:26.728507 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Sep 12 05:56:26.728565 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint
Sep 12 05:56:26.728616 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007]
Sep 12 05:56:26.728666 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit]
Sep 12 05:56:26.728715 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Sep 12 05:56:26.728763 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Sep 12 05:56:26.728812 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Sep 12 05:56:26.728865 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 12 05:56:26.728915 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 12 05:56:26.728966 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 12 05:56:26.729016 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Sep 12 05:56:26.729067 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Sep 12 05:56:26.729133 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Sep 12 05:56:26.731218 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Sep 12 05:56:26.731288 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Sep 12 05:56:26.731350 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint
Sep 12 05:56:26.731404 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff]
Sep 12 05:56:26.731463 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff]
Sep 12 05:56:26.731515 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff]
Sep 12 05:56:26.731567 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f]
Sep 12 05:56:26.731618 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Sep 12 05:56:26.731670 kernel: pci 0000:0b:00.0: supports D1 D2
Sep 12 05:56:26.731723 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 05:56:26.731775 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Sep 12 05:56:26.731827 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Sep 12 05:56:26.731880 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Sep 12 05:56:26.731932 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Sep 12 05:56:26.731984 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Sep 12 05:56:26.732037 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Sep 12 05:56:26.732102 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Sep 12 05:56:26.732155 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Sep 12 05:56:26.732239 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Sep 12 05:56:26.732293 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Sep 12 05:56:26.732346 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Sep 12 05:56:26.732398 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Sep 12 05:56:26.732449 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Sep 12 05:56:26.732501 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Sep 12 05:56:26.732556 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Sep 12 05:56:26.732610 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Sep 12 05:56:26.732678 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Sep 12 05:56:26.732731 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Sep 12 05:56:26.732783 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Sep 12 05:56:26.732836 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Sep 12 05:56:26.732887 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Sep 12 05:56:26.732941 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Sep 12 05:56:26.732993 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Sep 12 05:56:26.733046 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Sep 12 05:56:26.733097 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Sep 12 05:56:26.733106 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Sep 12 05:56:26.733113 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Sep 12 05:56:26.733119 kernel: ACPI: PCI: Interrupt link LNKB disabled
Sep 12 05:56:26.733125 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 05:56:26.733133 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Sep 12 05:56:26.733138 kernel: iommu: Default domain type: Translated
Sep 12 05:56:26.733144 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 05:56:26.733150 kernel: PCI: Using ACPI for IRQ routing
Sep 12 05:56:26.733156 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 05:56:26.733162 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Sep 12 05:56:26.733168 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Sep 12 05:56:26.733642 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Sep 12 05:56:26.733698 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Sep 12 05:56:26.733754 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 05:56:26.733764 kernel: vgaarb: loaded
Sep 12 05:56:26.733770 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Sep 12 05:56:26.733777 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Sep 12 05:56:26.733783 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 05:56:26.733788 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 05:56:26.733795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 05:56:26.733801 kernel: pnp: PnP ACPI init
Sep 12 05:56:26.733856 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Sep 12 05:56:26.733907 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Sep 12 05:56:26.733953 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Sep 12 05:56:26.734006 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Sep 12 05:56:26.734056 kernel: pnp 00:06: [dma 2]
Sep 12 05:56:26.734106 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Sep 12 05:56:26.734152 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Sep 12 05:56:26.734213 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Sep 12 05:56:26.734222 kernel: pnp: PnP ACPI: found 8 devices
Sep 12 05:56:26.734228 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 05:56:26.734234 kernel: NET: Registered PF_INET protocol family
Sep 12 05:56:26.734240 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 05:56:26.734246 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 05:56:26.734252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 05:56:26.734258 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 05:56:26.734266 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 05:56:26.734272 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 05:56:26.734278 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 05:56:26.734284 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 05:56:26.734290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 05:56:26.734296 kernel: NET: Registered PF_XDP protocol family
Sep 12 05:56:26.734348 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Sep 12 05:56:26.734402 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 05:56:26.734460 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 05:56:26.734517 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 05:56:26.734570 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 05:56:26.734623 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Sep 12 05:56:26.734675 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Sep 12 05:56:26.734727 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Sep 12 05:56:26.734780 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Sep 12 05:56:26.734831 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Sep 12 05:56:26.734887 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Sep 12 05:56:26.734940 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Sep 12 05:56:26.734991 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Sep 12 05:56:26.735044 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Sep 12 05:56:26.735096 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Sep 12 05:56:26.735148 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Sep 12 05:56:26.735585 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Sep 12 05:56:26.735646 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Sep 12 05:56:26.735703 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Sep 12 05:56:26.735757 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Sep 12 05:56:26.735810 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Sep 12 05:56:26.735863 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Sep 12 05:56:26.735916 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Sep 12 05:56:26.735968 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned
Sep 12 05:56:26.736019 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned
Sep 12 05:56:26.736070 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736124 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736176 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736239 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736290 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736340 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736390 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736440 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736491 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736544 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736595 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736645 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736696 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736745 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736796 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736845 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736898 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.736948 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.736998 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737048 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737099 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737148 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737211 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737264 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737318 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737368 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737419 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737474 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737524 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737574 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737625 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737675 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737728 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737777 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737828 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737878 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.737928 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.737978 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738028 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738077 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738133 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738182 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738254 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738304 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738353 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738402 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738451 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738500 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738549 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738601 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738650 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738700 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738749 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738798 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738847 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738896 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.738946 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Sep 12 05:56:26.738995 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Sep 12 05:56:26.739048 kernel: pci 0000:00:17.5: bridge
window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739097 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739146 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739211 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739263 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739313 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739362 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739412 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739463 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739512 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739565 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739615 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739665 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739714 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739763 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739813 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739866 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.739915 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.739965 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.740015 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.740064 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.740114 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.740164 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.740229 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.740282 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 12 05:56:26.740335 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 12 05:56:26.740386 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 12 05:56:26.740438 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 12 05:56:26.740493 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 12 05:56:26.740542 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 12 05:56:26.740591 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 05:56:26.740646 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Sep 12 05:56:26.740697 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 12 05:56:26.740750 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 12 05:56:26.740800 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 12 05:56:26.740850 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 12 05:56:26.740902 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 12 05:56:26.740952 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 12 05:56:26.741002 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 12 05:56:26.741051 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 05:56:26.741102 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 12 05:56:26.741152 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 12 05:56:26.741214 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Sep 12 05:56:26.741268 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 05:56:26.741318 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 12 05:56:26.741367 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 12 05:56:26.741416 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 05:56:26.741478 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 12 05:56:26.741530 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 12 05:56:26.741580 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 05:56:26.741630 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 12 05:56:26.741682 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 12 05:56:26.741732 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 05:56:26.741782 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 12 05:56:26.741832 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 12 05:56:26.741881 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 05:56:26.741931 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 12 05:56:26.741980 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 12 05:56:26.742031 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 05:56:26.742085 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Sep 12 05:56:26.742137 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 12 05:56:26.742193 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 12 05:56:26.742246 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 12 05:56:26.742296 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 05:56:26.742348 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 12 05:56:26.742397 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 12 05:56:26.742447 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 12 05:56:26.742500 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 12 05:56:26.742552 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 12 05:56:26.742602 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 12 05:56:26.742652 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 12 05:56:26.742702 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 05:56:26.742754 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 12 05:56:26.742804 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 12 05:56:26.742853 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 05:56:26.742907 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 12 05:56:26.742956 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 12 05:56:26.743007 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 05:56:26.743058 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 12 05:56:26.743108 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 12 05:56:26.743158 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 05:56:26.743224 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 12 05:56:26.743275 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 12 05:56:26.743328 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 05:56:26.743378 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 12 05:56:26.743428 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 12 05:56:26.743477 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 05:56:26.743529 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 
12 05:56:26.743579 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 12 05:56:26.743629 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 12 05:56:26.743681 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 05:56:26.743734 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 12 05:56:26.743785 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 12 05:56:26.743834 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 12 05:56:26.743883 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 05:56:26.743934 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 12 05:56:26.743984 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 12 05:56:26.744034 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 12 05:56:26.744083 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 05:56:26.744133 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 12 05:56:26.744193 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 12 05:56:26.744258 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 05:56:26.744309 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 12 05:56:26.744359 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 12 05:56:26.744409 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 05:56:26.744466 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 12 05:56:26.744516 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 12 05:56:26.744566 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 05:56:26.744620 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 12 05:56:26.744670 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 12 05:56:26.744719 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Sep 12 05:56:26.744769 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 12 05:56:26.744818 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 12 05:56:26.744868 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 05:56:26.744919 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 12 05:56:26.744971 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 12 05:56:26.745021 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 12 05:56:26.745073 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 05:56:26.745124 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 12 05:56:26.745174 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 12 05:56:26.745385 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 12 05:56:26.745439 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 05:56:26.745491 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 12 05:56:26.745541 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 12 05:56:26.745594 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 05:56:26.745645 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 12 05:56:26.745697 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 12 05:56:26.745747 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 12 05:56:26.745798 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 12 05:56:26.745848 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 12 05:56:26.745898 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 12 05:56:26.745951 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 12 05:56:26.746003 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 12 05:56:26.746053 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 05:56:26.746105 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 12 05:56:26.746154 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 12 05:56:26.746458 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 05:56:26.746523 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 12 05:56:26.748215 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 12 05:56:26.748285 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 05:56:26.748339 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 05:56:26.748386 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 05:56:26.748430 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 05:56:26.748474 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 12 05:56:26.748518 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 12 05:56:26.748566 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 12 05:56:26.748615 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 12 05:56:26.748661 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 12 05:56:26.748706 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 12 05:56:26.748752 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 12 05:56:26.748797 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 12 05:56:26.748841 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 12 05:56:26.748886 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 12 05:56:26.748941 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 12 05:56:26.748987 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 12 05:56:26.749032 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Sep 12 05:56:26.749083 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 12 05:56:26.749129 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 12 05:56:26.749174 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 12 05:56:26.749246 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 12 05:56:26.749297 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 12 05:56:26.749341 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 12 05:56:26.749391 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 12 05:56:26.749437 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 12 05:56:26.749491 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 12 05:56:26.749537 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 12 05:56:26.749590 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 12 05:56:26.749636 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 12 05:56:26.749685 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 12 05:56:26.749731 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 12 05:56:26.749781 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 12 05:56:26.749827 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 12 05:56:26.749880 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 12 05:56:26.749927 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 12 05:56:26.749972 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 12 05:56:26.750024 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 12 05:56:26.750070 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 12 05:56:26.750115 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Sep 12 05:56:26.750169 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 12 05:56:26.750262 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 12 05:56:26.750310 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 12 05:56:26.750360 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 12 05:56:26.750407 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 12 05:56:26.750462 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 12 05:56:26.750508 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 12 05:56:26.750562 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 12 05:56:26.750608 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 12 05:56:26.750658 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 12 05:56:26.750704 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 12 05:56:26.750753 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 12 05:56:26.750799 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 12 05:56:26.750852 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 12 05:56:26.750897 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 12 05:56:26.750942 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 12 05:56:26.750993 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 12 05:56:26.751039 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 12 05:56:26.751084 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 12 05:56:26.751134 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 12 05:56:26.751182 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 12 05:56:26.751237 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 12 
05:56:26.751287 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 12 05:56:26.751333 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 12 05:56:26.751382 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 12 05:56:26.751428 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 12 05:56:26.751487 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 12 05:56:26.751534 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 12 05:56:26.751586 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 12 05:56:26.751632 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 12 05:56:26.751682 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 12 05:56:26.751728 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 12 05:56:26.751780 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 12 05:56:26.751826 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 12 05:56:26.751870 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 12 05:56:26.751920 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 12 05:56:26.751965 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 12 05:56:26.752011 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 12 05:56:26.752062 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 12 05:56:26.752110 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 12 05:56:26.752159 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 12 05:56:26.753443 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 12 05:56:26.753511 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 12 05:56:26.753562 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Sep 12 05:56:26.753615 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 12 05:56:26.753666 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 12 05:56:26.753720 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 12 05:56:26.753767 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 12 05:56:26.753817 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 12 05:56:26.753864 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 12 05:56:26.753921 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 05:56:26.753932 kernel: PCI: CLS 32 bytes, default 64 Sep 12 05:56:26.753939 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 05:56:26.753945 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 12 05:56:26.753951 kernel: clocksource: Switched to clocksource tsc Sep 12 05:56:26.753957 kernel: Initialise system trusted keyrings Sep 12 05:56:26.753964 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 05:56:26.753970 kernel: Key type asymmetric registered Sep 12 05:56:26.753976 kernel: Asymmetric key parser 'x509' registered Sep 12 05:56:26.753981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 05:56:26.753988 kernel: io scheduler mq-deadline registered Sep 12 05:56:26.753994 kernel: io scheduler kyber registered Sep 12 05:56:26.754000 kernel: io scheduler bfq registered Sep 12 05:56:26.754054 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 12 05:56:26.754106 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754159 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 12 05:56:26.754226 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754279 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 12 05:56:26.754334 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754386 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 12 05:56:26.754437 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754495 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 12 05:56:26.754545 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754597 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 12 05:56:26.754649 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754703 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 12 05:56:26.754754 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754804 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 12 05:56:26.754854 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.754906 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 12 05:56:26.754956 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755008 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 12 05:56:26.755061 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755112 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 12 05:56:26.755165 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755234 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 12 05:56:26.755294 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755349 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 12 05:56:26.755400 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755452 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 12 05:56:26.755505 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755558 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 12 05:56:26.755609 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755660 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 12 05:56:26.755713 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755764 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 12 05:56:26.755814 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755868 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 12 
05:56:26.755919 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.755970 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 12 05:56:26.756021 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756072 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 12 05:56:26.756122 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756174 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 12 05:56:26.756251 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756307 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 12 05:56:26.756358 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756410 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 12 05:56:26.756460 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756512 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 12 05:56:26.756562 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756613 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 12 05:56:26.756666 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756718 kernel: pcieport 0000:00:18.1: PME: 
Signaling with IRQ 49 Sep 12 05:56:26.756768 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756819 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 12 05:56:26.756869 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.756920 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 12 05:56:26.756970 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.757021 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 12 05:56:26.757074 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.757125 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 12 05:56:26.757175 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.757245 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 12 05:56:26.757298 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.757349 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 12 05:56:26.757400 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 12 05:56:26.757412 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 05:56:26.757418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 05:56:26.757425 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 
05:56:26.757431 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 12 05:56:26.757438 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 05:56:26.757447 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 05:56:26.757501 kernel: rtc_cmos 00:01: registered as rtc0 Sep 12 05:56:26.757552 kernel: rtc_cmos 00:01: setting system clock to 2025-09-12T05:56:26 UTC (1757656586) Sep 12 05:56:26.757561 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 05:56:26.757604 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 12 05:56:26.757613 kernel: intel_pstate: CPU model not supported Sep 12 05:56:26.757620 kernel: NET: Registered PF_INET6 protocol family Sep 12 05:56:26.757626 kernel: Segment Routing with IPv6 Sep 12 05:56:26.757632 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 05:56:26.757639 kernel: NET: Registered PF_PACKET protocol family Sep 12 05:56:26.757645 kernel: Key type dns_resolver registered Sep 12 05:56:26.757653 kernel: IPI shorthand broadcast: enabled Sep 12 05:56:26.757659 kernel: sched_clock: Marking stable (2557003407, 165186486)->(2737542650, -15352757) Sep 12 05:56:26.757666 kernel: registered taskstats version 1 Sep 12 05:56:26.757672 kernel: Loading compiled-in X.509 certificates Sep 12 05:56:26.757678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c974434132f0296e0aaf9b1358c8dc50eba5c8b9' Sep 12 05:56:26.757684 kernel: Demotion targets for Node 0: null Sep 12 05:56:26.757690 kernel: Key type .fscrypt registered Sep 12 05:56:26.757696 kernel: Key type fscrypt-provisioning registered Sep 12 05:56:26.757703 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 05:56:26.757709 kernel: ima: Allocated hash algorithm: sha1 Sep 12 05:56:26.757715 kernel: ima: No architecture policies found Sep 12 05:56:26.757721 kernel: clk: Disabling unused clocks Sep 12 05:56:26.757729 kernel: Warning: unable to open an initial console. Sep 12 05:56:26.757735 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 12 05:56:26.757741 kernel: Write protecting the kernel read-only data: 24576k Sep 12 05:56:26.757748 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 12 05:56:26.757754 kernel: Run /init as init process Sep 12 05:56:26.757761 kernel: with arguments: Sep 12 05:56:26.757768 kernel: /init Sep 12 05:56:26.757774 kernel: with environment: Sep 12 05:56:26.757780 kernel: HOME=/ Sep 12 05:56:26.757786 kernel: TERM=linux Sep 12 05:56:26.757792 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 05:56:26.757799 systemd[1]: Successfully made /usr/ read-only. Sep 12 05:56:26.757807 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 05:56:26.757815 systemd[1]: Detected virtualization vmware. Sep 12 05:56:26.757822 systemd[1]: Detected architecture x86-64. Sep 12 05:56:26.757828 systemd[1]: Running in initrd. Sep 12 05:56:26.757834 systemd[1]: No hostname configured, using default hostname. Sep 12 05:56:26.757841 systemd[1]: Hostname set to . Sep 12 05:56:26.757848 systemd[1]: Initializing machine ID from random generator. Sep 12 05:56:26.757854 systemd[1]: Queued start job for default target initrd.target. Sep 12 05:56:26.757861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 12 05:56:26.757868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:56:26.757876 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 05:56:26.757883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 05:56:26.757890 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 05:56:26.757897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 05:56:26.757904 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 05:56:26.757911 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 05:56:26.757918 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:56:26.757925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:56:26.757931 systemd[1]: Reached target paths.target - Path Units. Sep 12 05:56:26.757938 systemd[1]: Reached target slices.target - Slice Units. Sep 12 05:56:26.757944 systemd[1]: Reached target swap.target - Swaps. Sep 12 05:56:26.757950 systemd[1]: Reached target timers.target - Timer Units. Sep 12 05:56:26.757957 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 05:56:26.757963 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 05:56:26.757970 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 05:56:26.757978 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 05:56:26.757985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 12 05:56:26.757992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:56:26.757999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:56:26.758006 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:56:26.758012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 05:56:26.758019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 05:56:26.758025 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 05:56:26.758033 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 05:56:26.758040 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 05:56:26.758047 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 05:56:26.758053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:56:26.758060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:56:26.758080 systemd-journald[244]: Collecting audit messages is disabled. Sep 12 05:56:26.758098 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 05:56:26.758106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:56:26.758113 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 05:56:26.758121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 05:56:26.758128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 05:56:26.758134 kernel: Bridge firewalling registered Sep 12 05:56:26.758140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 05:56:26.758147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:56:26.758154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:56:26.758160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 05:56:26.758168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 05:56:26.758176 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:56:26.758183 systemd-journald[244]: Journal started Sep 12 05:56:26.758220 systemd-journald[244]: Runtime Journal (/run/log/journal/6fa85312d73046088cc61caa0591c556) is 4.8M, max 38.8M, 34M free. Sep 12 05:56:26.706449 systemd-modules-load[245]: Inserted module 'overlay' Sep 12 05:56:26.727827 systemd-modules-load[245]: Inserted module 'br_netfilter' Sep 12 05:56:26.760201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:56:26.762204 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:56:26.763675 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:56:26.765288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:56:26.771267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 05:56:26.772257 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 05:56:26.775393 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 05:56:26.777343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:56:26.778410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 05:56:26.783595 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:56:26.806896 systemd-resolved[287]: Positive Trust Anchors: Sep 12 05:56:26.806906 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:56:26.806930 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:56:26.809511 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 12 05:56:26.810398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:56:26.810535 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:56:26.839204 kernel: SCSI subsystem initialized Sep 12 05:56:26.855217 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 05:56:26.864211 kernel: iscsi: registered transport (tcp) Sep 12 05:56:26.886250 kernel: iscsi: registered transport (qla4xxx) Sep 12 05:56:26.886295 kernel: QLogic iSCSI HBA Driver Sep 12 05:56:26.896365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:56:26.918306 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:56:26.918640 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:56:26.941754 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 05:56:26.942734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 05:56:26.979208 kernel: raid6: avx2x4 gen() 46250 MB/s Sep 12 05:56:26.996246 kernel: raid6: avx2x2 gen() 51805 MB/s Sep 12 05:56:27.013422 kernel: raid6: avx2x1 gen() 44244 MB/s Sep 12 05:56:27.013471 kernel: raid6: using algorithm avx2x2 gen() 51805 MB/s Sep 12 05:56:27.031414 kernel: raid6: .... xor() 32099 MB/s, rmw enabled Sep 12 05:56:27.031465 kernel: raid6: using avx2x2 recovery algorithm Sep 12 05:56:27.045199 kernel: xor: automatically using best checksumming function avx Sep 12 05:56:27.147213 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 05:56:27.150558 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:56:27.151437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:56:27.176838 systemd-udevd[493]: Using default interface naming scheme 'v255'. Sep 12 05:56:27.180402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:56:27.181260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 05:56:27.202694 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Sep 12 05:56:27.216371 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 05:56:27.217236 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:56:27.289508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:56:27.291296 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 05:56:27.364051 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 12 05:56:27.364089 kernel: vmw_pvscsi: using 64bit dma Sep 12 05:56:27.364097 kernel: vmw_pvscsi: max_id: 16 Sep 12 05:56:27.364104 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 12 05:56:27.372260 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 12 05:56:27.372296 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 12 05:56:27.372305 kernel: vmw_pvscsi: using MSI-X Sep 12 05:56:27.372317 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 12 05:56:27.375922 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 12 05:56:27.376038 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 12 05:56:27.376057 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Sep 12 05:56:27.387210 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 12 05:56:27.397269 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 05:56:27.402209 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 12 05:56:27.402365 (udev-worker)[550]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 12 05:56:27.405199 kernel: libata version 3.00 loaded. Sep 12 05:56:27.411227 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 12 05:56:27.411271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:56:27.411350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:56:27.412634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 05:56:27.413196 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 12 05:56:27.414197 kernel: scsi host1: ata_piix Sep 12 05:56:27.415319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:56:27.417100 kernel: scsi host2: ata_piix Sep 12 05:56:27.417210 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Sep 12 05:56:27.417222 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Sep 12 05:56:27.423201 kernel: AES CTR mode by8 optimization enabled Sep 12 05:56:27.428229 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 12 05:56:27.430198 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 05:56:27.430216 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 05:56:27.431915 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 12 05:56:27.432000 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 12 05:56:27.432066 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 12 05:56:27.441208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 05:56:27.443210 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 05:56:27.448118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:56:27.582206 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 12 05:56:27.588277 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 12 05:56:27.611557 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 12 05:56:27.611723 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 05:56:27.621219 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 05:56:27.642706 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 12 05:56:27.648293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. 
Sep 12 05:56:27.653597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 12 05:56:27.657857 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 12 05:56:27.657989 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 12 05:56:27.658652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 05:56:27.747387 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 05:56:27.760209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 05:56:27.906542 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 05:56:27.907147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:56:27.907333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:56:27.907597 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:56:27.908419 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 05:56:27.925322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:56:28.758233 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 05:56:28.758272 disk-uuid[643]: The operation has completed successfully. Sep 12 05:56:28.796248 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 05:56:28.796313 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 05:56:28.806673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 05:56:28.815043 sh[674]: Success Sep 12 05:56:28.830058 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 05:56:28.830090 kernel: device-mapper: uevent: version 1.0.3 Sep 12 05:56:28.830099 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 05:56:28.837285 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 12 05:56:28.875634 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 05:56:28.878227 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 05:56:28.887336 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 05:56:28.897202 kernel: BTRFS: device fsid 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (686) Sep 12 05:56:28.899393 kernel: BTRFS info (device dm-0): first mount of filesystem 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f Sep 12 05:56:28.899411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:56:28.907223 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 05:56:28.907243 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 05:56:28.907255 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 05:56:28.909907 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 05:56:28.910232 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:56:28.910805 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 12 05:56:28.912246 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 12 05:56:28.939210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (709) Sep 12 05:56:28.942803 kernel: BTRFS info (device sda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:56:28.942832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:56:28.951265 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 05:56:28.951292 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 05:56:28.956196 kernel: BTRFS info (device sda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:56:28.956721 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 05:56:28.957549 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 05:56:28.977946 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 12 05:56:28.979259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 05:56:29.068476 ignition[728]: Ignition 2.22.0 Sep 12 05:56:29.068482 ignition[728]: Stage: fetch-offline Sep 12 05:56:29.068499 ignition[728]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:29.068503 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:29.068551 ignition[728]: parsed url from cmdline: "" Sep 12 05:56:29.068552 ignition[728]: no config URL provided Sep 12 05:56:29.068555 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 05:56:29.068558 ignition[728]: no config at "/usr/lib/ignition/user.ign" Sep 12 05:56:29.068917 ignition[728]: config successfully fetched Sep 12 05:56:29.068935 ignition[728]: parsing config with SHA512: c597c2d2cb4bd684c7227adde236b1e08e10283d2247e0865ed64ce6e57ece3c356ef4b1b59bb0a26e15ea7ce9f00afe1628293540ae96f04dc5281648007b64 Sep 12 05:56:29.071555 unknown[728]: fetched base config from "system" Sep 12 05:56:29.071561 unknown[728]: fetched user config from "vmware" Sep 12 05:56:29.072095 ignition[728]: fetch-offline: fetch-offline passed Sep 12 05:56:29.072129 ignition[728]: Ignition finished successfully Sep 12 05:56:29.072936 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:56:29.080320 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 05:56:29.081247 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:56:29.107767 systemd-networkd[866]: lo: Link UP Sep 12 05:56:29.107950 systemd-networkd[866]: lo: Gained carrier Sep 12 05:56:29.108740 systemd-networkd[866]: Enumeration completed Sep 12 05:56:29.108930 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 05:56:29.112088 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 12 05:56:29.112183 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 12 05:56:29.109080 systemd[1]: Reached target network.target - Network. Sep 12 05:56:29.109184 systemd-networkd[866]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 12 05:56:29.111413 systemd-networkd[866]: ens192: Link UP Sep 12 05:56:29.111416 systemd-networkd[866]: ens192: Gained carrier Sep 12 05:56:29.112152 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 05:56:29.113265 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 05:56:29.138644 ignition[871]: Ignition 2.22.0 Sep 12 05:56:29.138885 ignition[871]: Stage: kargs Sep 12 05:56:29.138963 ignition[871]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:29.138968 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:29.139574 ignition[871]: kargs: kargs passed Sep 12 05:56:29.139601 ignition[871]: Ignition finished successfully Sep 12 05:56:29.141537 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 05:56:29.142416 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 05:56:29.156297 ignition[878]: Ignition 2.22.0 Sep 12 05:56:29.156310 ignition[878]: Stage: disks Sep 12 05:56:29.156391 ignition[878]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:29.156397 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:29.156989 ignition[878]: disks: disks passed Sep 12 05:56:29.157022 ignition[878]: Ignition finished successfully Sep 12 05:56:29.157881 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 05:56:29.158360 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Sep 12 05:56:29.158585 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 05:56:29.158820 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:56:29.159019 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:56:29.159245 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:56:29.159927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 05:56:29.175379 systemd-fsck[887]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 12 05:56:29.176931 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 05:56:29.177742 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 05:56:29.254201 kernel: EXT4-fs (sda9): mounted filesystem 2b8062f9-897a-46cb-bde4-2b62ba4cc712 r/w with ordered data mode. Quota mode: none. Sep 12 05:56:29.254752 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 05:56:29.255213 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 05:56:29.256297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:56:29.258224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 05:56:29.258619 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 05:56:29.258790 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 05:56:29.258804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:56:29.267176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 05:56:29.268252 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 05:56:29.273803 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (895) Sep 12 05:56:29.273896 kernel: BTRFS info (device sda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:56:29.273910 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:56:29.278289 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 05:56:29.278311 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 05:56:29.279025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 05:56:29.292311 systemd-resolved[287]: Detected conflict on linux IN A 139.178.70.104 Sep 12 05:56:29.292319 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Sep 12 05:56:29.301797 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 05:56:29.304617 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Sep 12 05:56:29.307062 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 05:56:29.308975 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 05:56:29.366035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 05:56:29.366949 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 05:56:29.368271 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 05:56:29.376239 kernel: BTRFS info (device sda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:56:29.389647 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 05:56:29.394000 ignition[1007]: INFO : Ignition 2.22.0 Sep 12 05:56:29.394000 ignition[1007]: INFO : Stage: mount Sep 12 05:56:29.394308 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:29.394308 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:29.394665 ignition[1007]: INFO : mount: mount passed Sep 12 05:56:29.395286 ignition[1007]: INFO : Ignition finished successfully Sep 12 05:56:29.395506 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 05:56:29.396474 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 05:56:29.896807 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 05:56:29.898162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:56:29.918233 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1019) Sep 12 05:56:29.921032 kernel: BTRFS info (device sda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:56:29.921092 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:56:29.924606 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 05:56:29.924670 kernel: BTRFS info (device sda6): enabling free space tree Sep 12 05:56:29.925934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 05:56:29.946147 ignition[1036]: INFO : Ignition 2.22.0 Sep 12 05:56:29.946147 ignition[1036]: INFO : Stage: files Sep 12 05:56:29.946512 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:29.946512 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:29.946868 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Sep 12 05:56:29.947568 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 05:56:29.947568 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 05:56:29.948931 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 05:56:29.949113 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 05:56:29.949269 unknown[1036]: wrote ssh authorized keys file for user: core Sep 12 05:56:29.949464 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 05:56:29.951552 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 05:56:29.951760 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 05:56:30.000537 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 05:56:30.346290 systemd-networkd[866]: ens192: Gained IPv6LL Sep 12 05:56:30.728583 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 05:56:30.728583 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:56:30.728989 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 05:56:30.730371 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:56:30.730531 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 05:56:30.730531 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 05:56:30.732715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 05:56:30.732983 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 05:56:30.732983 ignition[1036]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 05:56:31.025575 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 05:56:31.297714 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 05:56:31.298125 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 12 05:56:31.305534 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 12 05:56:31.305780 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 05:56:31.308457 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:56:31.308804 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 05:56:31.309160 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for 
"coreos-metadata.service" Sep 12 05:56:31.364584 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:56:31.368017 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 05:56:31.368017 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 05:56:31.368017 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 05:56:31.368017 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 05:56:31.370066 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:56:31.370066 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 05:56:31.370066 ignition[1036]: INFO : files: files passed Sep 12 05:56:31.370066 ignition[1036]: INFO : Ignition finished successfully Sep 12 05:56:31.370801 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 05:56:31.371589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 05:56:31.372263 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 05:56:31.386417 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 05:56:31.386643 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 05:56:31.389494 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:56:31.389494 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:56:31.390532 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 05:56:31.391284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 05:56:31.391681 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 05:56:31.392302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 05:56:31.423673 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 05:56:31.423740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 05:56:31.424035 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 05:56:31.424164 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 05:56:31.424392 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 05:56:31.424881 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 05:56:31.438891 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 05:56:31.439749 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 05:56:31.451277 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:56:31.451647 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:56:31.451971 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 05:56:31.452273 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 12 05:56:31.452356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 05:56:31.452853 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 05:56:31.453131 systemd[1]: Stopped target basic.target - Basic System. Sep 12 05:56:31.453412 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 05:56:31.453739 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:56:31.453888 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 05:56:31.454029 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:56:31.454171 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 05:56:31.454315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:56:31.454470 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 05:56:31.454612 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 05:56:31.454751 systemd[1]: Stopped target swap.target - Swaps. Sep 12 05:56:31.454861 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 05:56:31.454929 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:56:31.455144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:56:31.456287 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:56:31.456513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 05:56:31.456564 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:56:31.456704 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 05:56:31.456767 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 05:56:31.457031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 12 05:56:31.457093 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:56:31.457339 systemd[1]: Stopped target paths.target - Path Units. Sep 12 05:56:31.457471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 05:56:31.457530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:56:31.457705 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 05:56:31.457889 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 05:56:31.458072 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 05:56:31.458116 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 05:56:31.458260 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 05:56:31.458303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 05:56:31.458472 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 05:56:31.458535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 05:56:31.458699 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 05:56:31.458755 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 05:56:31.460286 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 05:56:31.460397 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 05:56:31.460466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:56:31.461038 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 05:56:31.462945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 05:56:31.463047 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:56:31.463355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Sep 12 05:56:31.463414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 05:56:31.466162 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 05:56:31.468280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 05:56:31.479885 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 05:56:31.483294 ignition[1092]: INFO : Ignition 2.22.0 Sep 12 05:56:31.483294 ignition[1092]: INFO : Stage: umount Sep 12 05:56:31.483703 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:56:31.483703 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 12 05:56:31.483944 ignition[1092]: INFO : umount: umount passed Sep 12 05:56:31.483944 ignition[1092]: INFO : Ignition finished successfully Sep 12 05:56:31.484574 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 05:56:31.484647 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 05:56:31.485046 systemd[1]: Stopped target network.target - Network. Sep 12 05:56:31.485153 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 05:56:31.485183 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 05:56:31.485377 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 05:56:31.485400 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 05:56:31.485549 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 05:56:31.485571 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 05:56:31.485722 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 05:56:31.485742 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 05:56:31.485949 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 05:56:31.486302 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 12 05:56:31.491727 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 05:56:31.491820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 05:56:31.493710 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 05:56:31.493973 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 05:56:31.494057 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 05:56:31.495162 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 05:56:31.495967 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 05:56:31.496145 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 05:56:31.496175 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:56:31.497365 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 05:56:31.497518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 05:56:31.497546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 05:56:31.497672 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 12 05:56:31.497695 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 12 05:56:31.497808 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 05:56:31.497832 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:56:31.497986 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 05:56:31.498007 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 05:56:31.498108 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 05:56:31.498129 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 12 05:56:31.498296 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:56:31.499168 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 05:56:31.499212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:56:31.513116 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 05:56:31.513318 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 05:56:31.514494 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 05:56:31.514616 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:56:31.515075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 05:56:31.515114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 05:56:31.515357 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 05:56:31.515381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:56:31.515536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 05:56:31.515568 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:56:31.515859 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 05:56:31.515893 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 05:56:31.516217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 05:56:31.516252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 05:56:31.517513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 05:56:31.517621 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 05:56:31.517647 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 12 05:56:31.517824 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 05:56:31.517852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:56:31.518014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:56:31.518041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:56:31.519029 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 05:56:31.519060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 05:56:31.519085 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:56:31.530746 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 05:56:31.530822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 05:56:31.564649 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 05:56:31.564748 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 05:56:31.565205 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 05:56:31.565342 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 05:56:31.565386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 05:56:31.566157 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 05:56:31.586694 systemd[1]: Switching root. Sep 12 05:56:31.618338 systemd-journald[244]: Journal stopped Sep 12 05:56:33.018753 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 12 05:56:33.018782 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 05:56:33.018790 kernel: SELinux: policy capability open_perms=1 Sep 12 05:56:33.018796 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 05:56:33.018801 kernel: SELinux: policy capability always_check_network=0 Sep 12 05:56:33.018807 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 05:56:33.018814 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 05:56:33.018819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 05:56:33.018825 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 05:56:33.018830 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 05:56:33.018836 kernel: audit: type=1403 audit(1757656592.281:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 05:56:33.018842 systemd[1]: Successfully loaded SELinux policy in 52.893ms. Sep 12 05:56:33.018851 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.821ms. Sep 12 05:56:33.018858 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 05:56:33.018865 systemd[1]: Detected virtualization vmware. Sep 12 05:56:33.018871 systemd[1]: Detected architecture x86-64. Sep 12 05:56:33.018879 systemd[1]: Detected first boot. Sep 12 05:56:33.018885 systemd[1]: Initializing machine ID from random generator. Sep 12 05:56:33.018892 zram_generator::config[1136]: No configuration found. 
Sep 12 05:56:33.019819 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 12 05:56:33.019835 kernel: Guest personality initialized and is active Sep 12 05:56:33.019842 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 05:56:33.019848 kernel: Initialized host personality Sep 12 05:56:33.019856 kernel: NET: Registered PF_VSOCK protocol family Sep 12 05:56:33.019864 systemd[1]: Populated /etc with preset unit settings. Sep 12 05:56:33.019872 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 12 05:56:33.019880 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Sep 12 05:56:33.019890 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 05:56:33.019897 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 05:56:33.019903 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 05:56:33.019911 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 05:56:33.019918 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 05:56:33.019925 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 05:56:33.019932 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 05:56:33.019939 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 05:56:33.019947 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 05:56:33.019956 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 05:56:33.019965 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Sep 12 05:56:33.019971 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 05:56:33.019980 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:56:33.019994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:56:33.020002 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 05:56:33.020009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 05:56:33.020015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 05:56:33.020022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 05:56:33.020031 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 05:56:33.020038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:56:33.020044 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:56:33.020051 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 05:56:33.020058 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 05:56:33.020064 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 05:56:33.020071 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 05:56:33.020078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:56:33.020085 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:56:33.020092 systemd[1]: Reached target slices.target - Slice Units. Sep 12 05:56:33.020099 systemd[1]: Reached target swap.target - Swaps. Sep 12 05:56:33.020107 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 12 05:56:33.020115 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 05:56:33.020124 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 05:56:33.020131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:56:33.020138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:56:33.020144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:56:33.020151 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 05:56:33.020158 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 05:56:33.020165 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 05:56:33.020171 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 05:56:33.020179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:33.020318 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 05:56:33.020331 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 05:56:33.020338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 05:56:33.020348 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 05:56:33.020360 systemd[1]: Reached target machines.target - Containers. Sep 12 05:56:33.020373 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 05:56:33.020385 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Sep 12 05:56:33.020395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 12 05:56:33.020402 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 05:56:33.020410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 05:56:33.020420 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 05:56:33.020432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 05:56:33.020440 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 05:56:33.020447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 05:56:33.020454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 05:56:33.020464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 05:56:33.020472 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 05:56:33.020482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 05:56:33.020491 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 05:56:33.020499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:56:33.022222 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 05:56:33.022234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:56:33.022244 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:56:33.022252 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 05:56:33.022262 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 12 05:56:33.022269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:56:33.022279 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 05:56:33.022292 systemd[1]: Stopped verity-setup.service. Sep 12 05:56:33.022304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:33.022332 systemd-journald[1229]: Collecting audit messages is disabled. Sep 12 05:56:33.022351 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 05:56:33.022361 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 05:56:33.022368 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 05:56:33.022375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 05:56:33.022382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 05:56:33.022392 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 05:56:33.022402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 05:56:33.022411 systemd-journald[1229]: Journal started Sep 12 05:56:33.022427 systemd-journald[1229]: Runtime Journal (/run/log/journal/0583c1562bc941a9b0d13c7c7f6c3919) is 4.8M, max 38.8M, 34M free. Sep 12 05:56:32.850032 systemd[1]: Queued start job for default target multi-user.target. Sep 12 05:56:32.862429 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 05:56:32.862674 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 05:56:33.023038 jq[1206]: true Sep 12 05:56:33.026527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:56:33.026557 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:56:33.027216 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 12 05:56:33.030214 kernel: loop: module loaded Sep 12 05:56:33.030419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 05:56:33.031304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 05:56:33.031429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 05:56:33.031696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 05:56:33.031804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 05:56:33.032294 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 05:56:33.032407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 05:56:33.032732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 05:56:33.033056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 05:56:33.034879 jq[1252]: true Sep 12 05:56:33.042207 kernel: fuse: init (API version 7.41) Sep 12 05:56:33.043442 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 05:56:33.043817 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 05:56:33.043842 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:56:33.045122 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 05:56:33.049306 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 05:56:33.054272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:56:33.065899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 05:56:33.070359 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 12 05:56:33.070528 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 05:56:33.086068 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 05:56:33.086251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 05:56:33.087248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:56:33.090313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 05:56:33.094658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 05:56:33.095964 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 05:56:33.096291 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 05:56:33.096656 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:56:33.096966 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 05:56:33.097262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 05:56:33.099089 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:56:33.102265 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 05:56:33.102997 ignition[1259]: Ignition 2.22.0 Sep 12 05:56:33.103423 ignition[1259]: deleting config from guestinfo properties Sep 12 05:56:33.147183 kernel: ACPI: bus type drm_connector registered Sep 12 05:56:33.140725 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 05:56:33.143056 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 05:56:33.143251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 12 05:56:33.151082 systemd-journald[1229]: Time spent on flushing to /var/log/journal/0583c1562bc941a9b0d13c7c7f6c3919 is 30.681ms for 1767 entries. Sep 12 05:56:33.151082 systemd-journald[1229]: System Journal (/var/log/journal/0583c1562bc941a9b0d13c7c7f6c3919) is 8M, max 584.8M, 576.8M free. Sep 12 05:56:33.235536 systemd-journald[1229]: Received client request to flush runtime journal. Sep 12 05:56:33.235583 kernel: loop0: detected capacity change from 0 to 2960 Sep 12 05:56:33.153113 ignition[1259]: Successfully deleted config Sep 12 05:56:33.154303 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 12 05:56:33.187699 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 05:56:33.187929 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 05:56:33.191373 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 05:56:33.192565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:56:33.228391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:56:33.237279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 05:56:33.239669 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 05:56:33.249525 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 05:56:33.253214 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 05:56:33.251779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:56:33.288207 kernel: loop1: detected capacity change from 0 to 128016 Sep 12 05:56:33.324378 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Sep 12 05:56:33.324391 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. 
Sep 12 05:56:33.327178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:56:33.370315 kernel: loop2: detected capacity change from 0 to 110984 Sep 12 05:56:33.400380 kernel: loop3: detected capacity change from 0 to 224512 Sep 12 05:56:33.511212 kernel: loop4: detected capacity change from 0 to 2960 Sep 12 05:56:33.524235 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 05:56:33.579210 kernel: loop6: detected capacity change from 0 to 110984 Sep 12 05:56:33.637427 kernel: loop7: detected capacity change from 0 to 224512 Sep 12 05:56:33.655741 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 12 05:56:33.656075 (sd-merge)[1310]: Merged extensions into '/usr'. Sep 12 05:56:33.664277 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 05:56:33.664290 systemd[1]: Reloading... Sep 12 05:56:33.703211 zram_generator::config[1332]: No configuration found. Sep 12 05:56:33.855963 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 12 05:56:33.900898 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 05:56:33.901265 systemd[1]: Reloading finished in 236 ms. Sep 12 05:56:33.920247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 05:56:33.920615 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 05:56:33.928276 systemd[1]: Starting ensure-sysext.service... Sep 12 05:56:33.930258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:56:33.932397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 05:56:33.954353 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 05:56:33.954374 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 05:56:33.954561 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 05:56:33.954743 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 05:56:33.955306 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 05:56:33.955540 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Sep 12 05:56:33.955577 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Sep 12 05:56:33.957101 systemd-udevd[1394]: Using default interface naming scheme 'v255'. Sep 12 05:56:33.962140 systemd[1]: Reload requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Sep 12 05:56:33.962152 systemd[1]: Reloading... Sep 12 05:56:33.967744 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 05:56:33.967753 systemd-tmpfiles[1393]: Skipping /boot Sep 12 05:56:33.972753 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 05:56:33.972763 systemd-tmpfiles[1393]: Skipping /boot Sep 12 05:56:34.000234 zram_generator::config[1422]: No configuration found. Sep 12 05:56:34.076620 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 12 05:56:34.136386 systemd[1]: Reloading finished in 174 ms. Sep 12 05:56:34.138911 ldconfig[1271]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 12 05:56:34.143520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:56:34.143913 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:56:34.148888 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 05:56:34.161174 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:56:34.165084 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 05:56:34.166439 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 05:56:34.169710 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:56:34.171097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 05:56:34.173500 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 05:56:34.182453 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:34.184175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 05:56:34.194009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 05:56:34.195318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 05:56:34.195602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:56:34.195677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:56:34.195746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 05:56:34.201107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:34.201291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:56:34.201383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:56:34.202632 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 05:56:34.202752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:34.207677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 05:56:34.208441 systemd[1]: Finished ensure-sysext.service. Sep 12 05:56:34.209572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 05:56:34.210952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 05:56:34.212370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 05:56:34.212396 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 05:56:34.218579 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 05:56:34.218730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 05:56:34.222975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 05:56:34.223109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 05:56:34.224647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 05:56:34.226524 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 05:56:34.226653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 05:56:34.227071 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 05:56:34.232490 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 05:56:34.234006 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 05:56:34.234349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 05:56:34.234841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 05:56:34.235162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 05:56:34.236671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 05:56:34.248547 augenrules[1543]: No rules Sep 12 05:56:34.248904 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:56:34.249502 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:56:34.255278 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 05:56:34.263255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 05:56:34.263530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 12 05:56:34.263776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 05:56:34.297688 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 05:56:34.384930 systemd-networkd[1507]: lo: Link UP Sep 12 05:56:34.385129 systemd-networkd[1507]: lo: Gained carrier Sep 12 05:56:34.386008 systemd-networkd[1507]: Enumeration completed Sep 12 05:56:34.387037 systemd-networkd[1507]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 12 05:56:34.389750 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 12 05:56:34.389872 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 12 05:56:34.390571 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 05:56:34.391168 systemd-networkd[1507]: ens192: Link UP Sep 12 05:56:34.391459 systemd-networkd[1507]: ens192: Gained carrier Sep 12 05:56:34.392143 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 05:56:34.395327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 05:56:34.409507 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 05:56:34.409674 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 05:56:34.412202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 05:56:34.412606 systemd-resolved[1508]: Positive Trust Anchors: Sep 12 05:56:34.412764 systemd-resolved[1508]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:56:34.412791 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:56:34.416207 kernel: ACPI: button: Power Button [PWRF] Sep 12 05:56:34.422456 systemd-resolved[1508]: Defaulting to hostname 'linux'. Sep 12 05:56:34.423394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:56:34.423564 systemd[1]: Reached target network.target - Network. Sep 12 05:56:34.423660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:56:34.423777 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:56:34.423923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 05:56:34.424050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 05:56:34.424165 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 05:56:34.424356 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 05:56:34.424492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 05:56:34.424608 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 05:56:34.424727 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 05:56:34.424747 systemd[1]: Reached target paths.target - Path Units. Sep 12 05:56:34.424835 systemd[1]: Reached target timers.target - Timer Units. Sep 12 05:56:34.433202 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 05:56:34.436152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 05:56:34.437358 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 05:56:34.438763 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 05:56:34.439519 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 05:56:34.439797 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 05:56:34.444413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 05:56:34.444780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 05:56:34.446000 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 05:56:34.446308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 05:56:34.448858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 12 05:56:34.449819 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:56:34.450158 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:56:34.450439 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 05:56:34.450459 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 05:56:34.452061 systemd[1]: Starting containerd.service - containerd container runtime... 
Sep 12 05:56:34.455476 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 05:56:34.456242 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 05:56:34.460165 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 05:56:34.462108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 05:56:34.462241 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 05:56:34.463428 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 05:56:34.467054 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 05:56:34.475222 jq[1580]: false Sep 12 05:56:34.475677 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 05:56:34.478662 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 05:56:34.479534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 05:56:34.480743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 05:56:34.490113 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 05:56:34.490761 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 05:56:34.491248 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 05:56:34.493281 extend-filesystems[1581]: Found /dev/sda6 Sep 12 05:56:34.493987 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 05:56:34.496467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 12 05:56:34.496911 oslogin_cache_refresh[1582]: Refreshing passwd entry cache Sep 12 05:56:34.497759 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache Sep 12 05:56:34.499925 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 12 05:56:34.503937 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting Sep 12 05:56:34.504012 oslogin_cache_refresh[1582]: Failure getting users, quitting Sep 12 05:56:34.504621 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 05:56:34.504621 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache Sep 12 05:56:34.504368 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 05:56:34.504393 oslogin_cache_refresh[1582]: Refreshing group entry cache Sep 12 05:56:34.507202 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 12 05:56:34.507667 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting Sep 12 05:56:34.507667 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 05:56:34.507630 oslogin_cache_refresh[1582]: Failure getting groups, quitting Sep 12 05:56:34.507637 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 05:56:34.508532 extend-filesystems[1581]: Found /dev/sda9 Sep 12 05:56:34.509694 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 05:56:34.510007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 05:56:34.510124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 05:56:34.510204 extend-filesystems[1581]: Checking size of /dev/sda9 Sep 12 05:56:34.510281 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 05:56:34.510393 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 05:56:34.516356 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 05:56:34.517933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 05:56:34.523195 jq[1594]: true Sep 12 05:56:34.534176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 05:56:34.550100 dbus-daemon[1578]: [system] SELinux support is enabled Sep 12 05:56:34.550218 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 05:56:34.552046 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 05:56:34.552064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 05:56:34.553792 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 05:56:34.553804 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 05:56:34.563421 tar[1601]: linux-amd64/LICENSE Sep 12 05:56:34.563421 tar[1601]: linux-amd64/helm Sep 12 05:56:34.564777 extend-filesystems[1581]: Old size kept for /dev/sda9 Sep 12 05:56:34.565437 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 05:56:34.566441 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 05:56:34.569848 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 12 05:56:34.570002 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 05:56:34.572050 update_engine[1592]: I20250912 05:56:34.571993 1592 main.cc:92] Flatcar Update Engine starting Sep 12 05:56:34.572762 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Sep 12 05:56:34.577575 jq[1614]: true Sep 12 05:56:34.577785 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 12 05:56:34.581148 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 05:56:34.581896 update_engine[1592]: I20250912 05:56:34.581862 1592 update_check_scheduler.cc:74] Next update check in 10m51s Sep 12 05:56:34.585153 systemd[1]: Started update-engine.service - Update Engine. Sep 12 05:56:34.599577 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 05:56:34.662441 unknown[1628]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 12 05:56:34.664117 unknown[1628]: Core dump limit set to -1 Sep 12 05:56:34.666462 bash[1647]: Updated "/home/core/.ssh/authorized_keys" Sep 12 05:56:34.667164 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 05:56:34.668017 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 05:56:34.669582 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Sep 12 05:56:34.732390 systemd-logind[1591]: New seat seat0. Sep 12 05:56:34.736164 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 05:56:34.778578 (udev-worker)[1497]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 12 05:56:34.787484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:58:07.936187 systemd-timesyncd[1526]: Contacted time server 139.177.202.26:123 (0.flatcar.pool.ntp.org). 
Sep 12 05:58:07.936319 systemd-timesyncd[1526]: Initial clock synchronization to Fri 2025-09-12 05:58:07.935858 UTC. Sep 12 05:58:07.936783 systemd-resolved[1508]: Clock change detected. Flushing caches. Sep 12 05:58:07.984900 sshd_keygen[1622]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 05:58:08.001251 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 05:58:08.005712 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 05:58:08.059585 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 05:58:08.062200 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 05:58:08.068811 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 05:58:08.082610 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 05:58:08.082798 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 05:58:08.084890 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 12 05:58:08.088495 containerd[1620]: time="2025-09-12T05:58:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 05:58:08.089097 containerd[1620]: time="2025-09-12T05:58:08.088953696Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 05:58:08.110036 containerd[1620]: time="2025-09-12T05:58:08.109923103Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.389µs" Sep 12 05:58:08.110036 containerd[1620]: time="2025-09-12T05:58:08.109947812Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 05:58:08.110036 containerd[1620]: time="2025-09-12T05:58:08.109960484Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 05:58:08.110301 containerd[1620]: time="2025-09-12T05:58:08.110288718Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 05:58:08.111613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 05:58:08.112587 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 12 05:58:08.113623 containerd[1620]: time="2025-09-12T05:58:08.113498826Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 05:58:08.113623 containerd[1620]: time="2025-09-12T05:58:08.113530332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115006330Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115019678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115168795Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115180285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115188137Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115192741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115234744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: 
time="2025-09-12T05:58:08.115344541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115360996Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115367641Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 05:58:08.116123 containerd[1620]: time="2025-09-12T05:58:08.115391362Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 05:58:08.115282 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 05:58:08.116454 containerd[1620]: time="2025-09-12T05:58:08.115521140Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 05:58:08.115662 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 05:58:08.119332 containerd[1620]: time="2025-09-12T05:58:08.119241219Z" level=info msg="metadata content store policy set" policy=shared Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123571996Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123610951Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123629482Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123637262Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123644375Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 05:58:08.123744 containerd[1620]: time="2025-09-12T05:58:08.123650412Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124160239Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124182213Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124190302Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124197211Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 
containerd[1620]: time="2025-09-12T05:58:08.124202348Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124209606Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124284592Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 05:58:08.124329 containerd[1620]: time="2025-09-12T05:58:08.124296768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124606824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124624335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124632963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124642791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124649311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124655932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124661944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 05:58:08.125695 containerd[1620]: time="2025-09-12T05:58:08.124667692Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 05:58:08.127876 containerd[1620]: time="2025-09-12T05:58:08.124674626Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 05:58:08.127876 containerd[1620]: time="2025-09-12T05:58:08.125953537Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 05:58:08.127876 containerd[1620]: time="2025-09-12T05:58:08.125965435Z" level=info msg="Start snapshots syncer" Sep 12 05:58:08.127876 containerd[1620]: time="2025-09-12T05:58:08.125997649Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126156386Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126197020Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126245121Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126302926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126315250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126321261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126329454Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126336259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126342096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126348352Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126362424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126372117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126378917Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126404080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126416102Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126421539Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126426518Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126430732Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126438911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: 
time="2025-09-12T05:58:08.126446800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126455867Z" level=info msg="runtime interface created" Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126458776Z" level=info msg="created NRI interface" Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126463226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126469241Z" level=info msg="Connect containerd service" Sep 12 05:58:08.127951 containerd[1620]: time="2025-09-12T05:58:08.126483705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 05:58:08.129773 containerd[1620]: time="2025-09-12T05:58:08.129411551Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 05:58:08.226499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:58:08.249448 containerd[1620]: time="2025-09-12T05:58:08.249423115Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 05:58:08.249557 containerd[1620]: time="2025-09-12T05:58:08.249481498Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 05:58:08.249557 containerd[1620]: time="2025-09-12T05:58:08.249499708Z" level=info msg="Start subscribing containerd event" Sep 12 05:58:08.249557 containerd[1620]: time="2025-09-12T05:58:08.249532676Z" level=info msg="Start recovering state" Sep 12 05:58:08.249602 containerd[1620]: time="2025-09-12T05:58:08.249593356Z" level=info msg="Start event monitor" Sep 12 05:58:08.249616 containerd[1620]: time="2025-09-12T05:58:08.249604448Z" level=info msg="Start cni network conf syncer for default" Sep 12 05:58:08.249616 containerd[1620]: time="2025-09-12T05:58:08.249608841Z" level=info msg="Start streaming server" Sep 12 05:58:08.249650 containerd[1620]: time="2025-09-12T05:58:08.249616230Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 05:58:08.249650 containerd[1620]: time="2025-09-12T05:58:08.249622944Z" level=info msg="runtime interface starting up..." Sep 12 05:58:08.249650 containerd[1620]: time="2025-09-12T05:58:08.249626007Z" level=info msg="starting plugins..." Sep 12 05:58:08.249650 containerd[1620]: time="2025-09-12T05:58:08.249633094Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 05:58:08.249703 containerd[1620]: time="2025-09-12T05:58:08.249693778Z" level=info msg="containerd successfully booted in 0.161411s" Sep 12 05:58:08.250622 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 05:58:08.254203 tar[1601]: linux-amd64/README.md Sep 12 05:58:08.266786 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 05:58:08.962698 systemd-networkd[1507]: ens192: Gained IPv6LL Sep 12 05:58:08.964377 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 05:58:08.965591 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 05:58:08.967179 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... 
Sep 12 05:58:08.979679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:58:08.981697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 05:58:09.038092 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 05:58:09.038245 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 12 05:58:09.038669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 05:58:09.042421 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 05:58:10.416271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:58:10.416871 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 05:58:10.417502 systemd[1]: Startup finished in 2.590s (kernel) + 5.689s (initrd) + 5.074s (userspace) = 13.353s. Sep 12 05:58:10.423085 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:58:10.447882 login[1726]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 05:58:10.448981 login[1727]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 05:58:10.455204 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 05:58:10.455906 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 05:58:10.462849 systemd-logind[1591]: New session 2 of user core. Sep 12 05:58:10.465903 systemd-logind[1591]: New session 1 of user core. Sep 12 05:58:10.473758 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 05:58:10.475427 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 05:58:10.492640 (systemd)[1808]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 05:58:10.493864 systemd-logind[1591]: New session c1 of user core. Sep 12 05:58:10.590544 systemd[1808]: Queued start job for default target default.target. Sep 12 05:58:10.599339 systemd[1808]: Created slice app.slice - User Application Slice. Sep 12 05:58:10.599355 systemd[1808]: Reached target paths.target - Paths. Sep 12 05:58:10.599381 systemd[1808]: Reached target timers.target - Timers. Sep 12 05:58:10.601598 systemd[1808]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 05:58:10.606627 systemd[1808]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 05:58:10.606657 systemd[1808]: Reached target sockets.target - Sockets. Sep 12 05:58:10.606680 systemd[1808]: Reached target basic.target - Basic System. Sep 12 05:58:10.606701 systemd[1808]: Reached target default.target - Main User Target. Sep 12 05:58:10.606716 systemd[1808]: Startup finished in 108ms. Sep 12 05:58:10.607054 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 05:58:10.617757 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 05:58:10.619011 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 05:58:11.202133 kubelet[1801]: E0912 05:58:11.202090 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:58:11.203600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:58:11.203709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:58:11.203940 systemd[1]: kubelet.service: Consumed 722ms CPU time, 265.8M memory peak. 
Sep 12 05:58:21.454111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 05:58:21.455116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:58:21.806444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:58:21.815716 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:58:21.869181 kubelet[1852]: E0912 05:58:21.869147 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:58:21.871474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:58:21.871631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:58:21.872004 systemd[1]: kubelet.service: Consumed 98ms CPU time, 108.9M memory peak. Sep 12 05:58:32.122102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 05:58:32.123835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:58:32.463083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 05:58:32.468853 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:58:32.518339 kubelet[1867]: E0912 05:58:32.518307 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:58:32.519887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:58:32.519972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:58:32.520222 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110.5M memory peak. Sep 12 05:58:38.034931 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 05:58:38.035832 systemd[1]: Started sshd@0-139.178.70.104:22-139.178.68.195:44472.service - OpenSSH per-connection server daemon (139.178.68.195:44472). Sep 12 05:58:38.088576 sshd[1875]: Accepted publickey for core from 139.178.68.195 port 44472 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.089284 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.091898 systemd-logind[1591]: New session 3 of user core. Sep 12 05:58:38.098951 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 05:58:38.154416 systemd[1]: Started sshd@1-139.178.70.104:22-139.178.68.195:44476.service - OpenSSH per-connection server daemon (139.178.68.195:44476). 
Sep 12 05:58:38.195308 sshd[1881]: Accepted publickey for core from 139.178.68.195 port 44476 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.195852 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.198469 systemd-logind[1591]: New session 4 of user core. Sep 12 05:58:38.208814 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 05:58:38.257585 sshd[1884]: Connection closed by 139.178.68.195 port 44476 Sep 12 05:58:38.257903 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Sep 12 05:58:38.266954 systemd[1]: sshd@1-139.178.70.104:22-139.178.68.195:44476.service: Deactivated successfully. Sep 12 05:58:38.268008 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 05:58:38.268576 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Sep 12 05:58:38.269961 systemd[1]: Started sshd@2-139.178.70.104:22-139.178.68.195:44490.service - OpenSSH per-connection server daemon (139.178.68.195:44490). Sep 12 05:58:38.270820 systemd-logind[1591]: Removed session 4. Sep 12 05:58:38.306786 sshd[1890]: Accepted publickey for core from 139.178.68.195 port 44490 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.307763 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.310733 systemd-logind[1591]: New session 5 of user core. Sep 12 05:58:38.320669 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 05:58:38.368279 sshd[1893]: Connection closed by 139.178.68.195 port 44490 Sep 12 05:58:38.368705 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Sep 12 05:58:38.378255 systemd[1]: sshd@2-139.178.70.104:22-139.178.68.195:44490.service: Deactivated successfully. Sep 12 05:58:38.379458 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 05:58:38.380133 systemd-logind[1591]: Session 5 logged out. 
Waiting for processes to exit. Sep 12 05:58:38.381522 systemd[1]: Started sshd@3-139.178.70.104:22-139.178.68.195:44498.service - OpenSSH per-connection server daemon (139.178.68.195:44498). Sep 12 05:58:38.383079 systemd-logind[1591]: Removed session 5. Sep 12 05:58:38.422778 sshd[1899]: Accepted publickey for core from 139.178.68.195 port 44498 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.423754 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.427634 systemd-logind[1591]: New session 6 of user core. Sep 12 05:58:38.437669 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 05:58:38.486462 sshd[1902]: Connection closed by 139.178.68.195 port 44498 Sep 12 05:58:38.487152 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Sep 12 05:58:38.495582 systemd[1]: sshd@3-139.178.70.104:22-139.178.68.195:44498.service: Deactivated successfully. Sep 12 05:58:38.496525 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 05:58:38.497020 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Sep 12 05:58:38.498312 systemd[1]: Started sshd@4-139.178.70.104:22-139.178.68.195:44500.service - OpenSSH per-connection server daemon (139.178.68.195:44500). Sep 12 05:58:38.499606 systemd-logind[1591]: Removed session 6. Sep 12 05:58:38.532523 sshd[1908]: Accepted publickey for core from 139.178.68.195 port 44500 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.533197 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.535741 systemd-logind[1591]: New session 7 of user core. Sep 12 05:58:38.545663 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 05:58:38.603366 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 05:58:38.604133 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:58:38.617256 sudo[1912]: pam_unix(sudo:session): session closed for user root Sep 12 05:58:38.618185 sshd[1911]: Connection closed by 139.178.68.195 port 44500 Sep 12 05:58:38.618645 sshd-session[1908]: pam_unix(sshd:session): session closed for user core Sep 12 05:58:38.630248 systemd[1]: sshd@4-139.178.70.104:22-139.178.68.195:44500.service: Deactivated successfully. Sep 12 05:58:38.631861 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 05:58:38.632595 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Sep 12 05:58:38.635092 systemd[1]: Started sshd@5-139.178.70.104:22-139.178.68.195:44504.service - OpenSSH per-connection server daemon (139.178.68.195:44504). Sep 12 05:58:38.635817 systemd-logind[1591]: Removed session 7. Sep 12 05:58:38.677782 sshd[1918]: Accepted publickey for core from 139.178.68.195 port 44504 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.678568 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.681269 systemd-logind[1591]: New session 8 of user core. Sep 12 05:58:38.690733 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 05:58:38.741140 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 05:58:38.741342 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:58:38.744427 sudo[1923]: pam_unix(sudo:session): session closed for user root Sep 12 05:58:38.748351 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 05:58:38.748749 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:58:38.756316 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:58:38.788198 augenrules[1945]: No rules Sep 12 05:58:38.788954 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:58:38.789108 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:58:38.789611 sudo[1922]: pam_unix(sudo:session): session closed for user root Sep 12 05:58:38.790514 sshd[1921]: Connection closed by 139.178.68.195 port 44504 Sep 12 05:58:38.790761 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Sep 12 05:58:38.796156 systemd[1]: sshd@5-139.178.70.104:22-139.178.68.195:44504.service: Deactivated successfully. Sep 12 05:58:38.797065 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 05:58:38.797793 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Sep 12 05:58:38.799629 systemd[1]: Started sshd@6-139.178.70.104:22-139.178.68.195:44512.service - OpenSSH per-connection server daemon (139.178.68.195:44512). Sep 12 05:58:38.800105 systemd-logind[1591]: Removed session 8. 
Sep 12 05:58:38.835439 sshd[1954]: Accepted publickey for core from 139.178.68.195 port 44512 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 05:58:38.836382 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:58:38.839688 systemd-logind[1591]: New session 9 of user core. Sep 12 05:58:38.847664 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 05:58:38.897911 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 05:58:38.898105 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:58:39.180959 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 05:58:39.189871 (dockerd)[1977]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 05:58:39.412561 dockerd[1977]: time="2025-09-12T05:58:39.412526134Z" level=info msg="Starting up" Sep 12 05:58:39.413024 dockerd[1977]: time="2025-09-12T05:58:39.413009689Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 05:58:39.418966 dockerd[1977]: time="2025-09-12T05:58:39.418941267Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 05:58:39.441540 dockerd[1977]: time="2025-09-12T05:58:39.441493130Z" level=info msg="Loading containers: start." Sep 12 05:58:39.449569 kernel: Initializing XFRM netlink socket Sep 12 05:58:39.591926 systemd-networkd[1507]: docker0: Link UP Sep 12 05:58:39.592795 dockerd[1977]: time="2025-09-12T05:58:39.592774991Z" level=info msg="Loading containers: done." 
Sep 12 05:58:39.600573 dockerd[1977]: time="2025-09-12T05:58:39.600537993Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 05:58:39.600642 dockerd[1977]: time="2025-09-12T05:58:39.600595971Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 05:58:39.600642 dockerd[1977]: time="2025-09-12T05:58:39.600635377Z" level=info msg="Initializing buildkit" Sep 12 05:58:39.609948 dockerd[1977]: time="2025-09-12T05:58:39.609935147Z" level=info msg="Completed buildkit initialization" Sep 12 05:58:39.613932 dockerd[1977]: time="2025-09-12T05:58:39.613917497Z" level=info msg="Daemon has completed initialization" Sep 12 05:58:39.614074 dockerd[1977]: time="2025-09-12T05:58:39.614043569Z" level=info msg="API listen on /run/docker.sock" Sep 12 05:58:39.614163 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 05:58:40.391198 containerd[1620]: time="2025-09-12T05:58:40.391166991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 05:58:40.918524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109182098.mount: Deactivated successfully. 
Sep 12 05:58:41.902903 containerd[1620]: time="2025-09-12T05:58:41.902740468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:58:41.909024 containerd[1620]: time="2025-09-12T05:58:41.908999653Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 05:58:41.920524 containerd[1620]: time="2025-09-12T05:58:41.920492980Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:58:41.926751 containerd[1620]: time="2025-09-12T05:58:41.926719684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:58:41.927517 containerd[1620]: time="2025-09-12T05:58:41.927378501Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.5361832s" Sep 12 05:58:41.927517 containerd[1620]: time="2025-09-12T05:58:41.927400841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 05:58:41.927953 containerd[1620]: time="2025-09-12T05:58:41.927930288Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 05:58:42.619996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Sep 12 05:58:42.623418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 05:58:42.971919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 05:58:42.980789 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 05:58:43.012008 kubelet[2257]: E0912 05:58:43.011955 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 05:58:43.014182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 05:58:43.014522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 05:58:43.014774 systemd[1]: kubelet.service: Consumed 103ms CPU time, 109.8M memory peak.
Sep 12 05:58:43.680270 containerd[1620]: time="2025-09-12T05:58:43.680240543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:43.689652 containerd[1620]: time="2025-09-12T05:58:43.689614182Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 05:58:43.701386 containerd[1620]: time="2025-09-12T05:58:43.701355832Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:43.711514 containerd[1620]: time="2025-09-12T05:58:43.711485965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:43.712191 containerd[1620]: time="2025-09-12T05:58:43.712102106Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.784149342s"
Sep 12 05:58:43.712191 containerd[1620]: time="2025-09-12T05:58:43.712126267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 05:58:43.712614 containerd[1620]: time="2025-09-12T05:58:43.712600599Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 05:58:44.941961 containerd[1620]: time="2025-09-12T05:58:44.941928807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:44.944213 containerd[1620]: time="2025-09-12T05:58:44.944098771Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 05:58:44.949207 containerd[1620]: time="2025-09-12T05:58:44.949191816Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:44.954237 containerd[1620]: time="2025-09-12T05:58:44.954223567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:44.954672 containerd[1620]: time="2025-09-12T05:58:44.954655538Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.241997363s"
Sep 12 05:58:44.954703 containerd[1620]: time="2025-09-12T05:58:44.954672572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 05:58:44.954970 containerd[1620]: time="2025-09-12T05:58:44.954949281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 05:58:45.814944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139360259.mount: Deactivated successfully.
Sep 12 05:58:46.260579 containerd[1620]: time="2025-09-12T05:58:46.260197082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:46.265817 containerd[1620]: time="2025-09-12T05:58:46.265794836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 05:58:46.273133 containerd[1620]: time="2025-09-12T05:58:46.273088515Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:46.279831 containerd[1620]: time="2025-09-12T05:58:46.279804084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:46.280172 containerd[1620]: time="2025-09-12T05:58:46.280156947Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.325191306s"
Sep 12 05:58:46.280227 containerd[1620]: time="2025-09-12T05:58:46.280217468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 05:58:46.280540 containerd[1620]: time="2025-09-12T05:58:46.280520769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 05:58:46.904922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428057222.mount: Deactivated successfully.
Sep 12 05:58:47.605581 containerd[1620]: time="2025-09-12T05:58:47.605452288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:47.612569 containerd[1620]: time="2025-09-12T05:58:47.612521918Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 05:58:47.617737 containerd[1620]: time="2025-09-12T05:58:47.617701633Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:47.626653 containerd[1620]: time="2025-09-12T05:58:47.626612879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:47.627504 containerd[1620]: time="2025-09-12T05:58:47.627417468Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.346875079s"
Sep 12 05:58:47.627504 containerd[1620]: time="2025-09-12T05:58:47.627440437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 05:58:47.627824 containerd[1620]: time="2025-09-12T05:58:47.627789231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 05:58:48.055183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395566664.mount: Deactivated successfully.
Sep 12 05:58:48.075139 containerd[1620]: time="2025-09-12T05:58:48.074698508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 05:58:48.078793 containerd[1620]: time="2025-09-12T05:58:48.078777919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 05:58:48.081107 containerd[1620]: time="2025-09-12T05:58:48.081091736Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 05:58:48.085473 containerd[1620]: time="2025-09-12T05:58:48.085458074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 05:58:48.085926 containerd[1620]: time="2025-09-12T05:58:48.085904400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 458.091952ms"
Sep 12 05:58:48.085964 containerd[1620]: time="2025-09-12T05:58:48.085927661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 05:58:48.086297 containerd[1620]: time="2025-09-12T05:58:48.086277734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 05:58:48.593860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3506364915.mount: Deactivated successfully.
Sep 12 05:58:50.050072 systemd[1]: Started sshd@7-139.178.70.104:22-80.94.95.115:36152.service - OpenSSH per-connection server daemon (80.94.95.115:36152).
Sep 12 05:58:51.246622 containerd[1620]: time="2025-09-12T05:58:51.246357796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:51.247264 containerd[1620]: time="2025-09-12T05:58:51.247241072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 05:58:51.247833 containerd[1620]: time="2025-09-12T05:58:51.247802463Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:51.250569 containerd[1620]: time="2025-09-12T05:58:51.250355091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:58:51.251688 containerd[1620]: time="2025-09-12T05:58:51.251668690Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.165062539s"
Sep 12 05:58:51.251778 containerd[1620]: time="2025-09-12T05:58:51.251764206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 05:58:52.640804 sshd[2382]: Connection closed by authenticating user root 80.94.95.115 port 36152 [preauth]
Sep 12 05:58:52.641972 systemd[1]: sshd@7-139.178.70.104:22-80.94.95.115:36152.service: Deactivated successfully.
Sep 12 05:58:53.065429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 05:58:53.067755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 05:58:53.075889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 05:58:53.075934 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 05:58:53.076162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 05:58:53.076856 update_engine[1592]: I20250912 05:58:53.076599 1592 update_attempter.cc:509] Updating boot flags...
Sep 12 05:58:53.078686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 05:58:53.100379 systemd[1]: Reload requested from client PID 2429 ('systemctl') (unit session-9.scope)...
Sep 12 05:58:53.100470 systemd[1]: Reloading...
Sep 12 05:58:53.179576 zram_generator::config[2484]: No configuration found.
Sep 12 05:58:53.269593 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Sep 12 05:58:53.336789 systemd[1]: Reloading finished in 235 ms.
Sep 12 05:58:53.390290 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 05:58:53.390454 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 05:58:53.390720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 05:58:53.390796 systemd[1]: kubelet.service: Consumed 44ms CPU time, 78.2M memory peak.
Sep 12 05:58:53.392018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 05:58:53.820239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 05:58:53.824023 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 05:58:53.859944 kubelet[2552]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 05:58:53.860136 kubelet[2552]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 05:58:53.860166 kubelet[2552]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 05:58:53.860245 kubelet[2552]: I0912 05:58:53.860230 2552 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 05:58:54.246569 kubelet[2552]: I0912 05:58:54.246376 2552 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 05:58:54.246569 kubelet[2552]: I0912 05:58:54.246392 2552 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 05:58:54.246569 kubelet[2552]: I0912 05:58:54.246546 2552 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 05:58:54.277193 kubelet[2552]: E0912 05:58:54.277122 2552 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError"
Sep 12 05:58:54.278265 kubelet[2552]: I0912 05:58:54.278246 2552 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 05:58:54.289599 kubelet[2552]: I0912 05:58:54.289517 2552 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 05:58:54.294005 kubelet[2552]: I0912 05:58:54.293994 2552 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 05:58:54.295614 kubelet[2552]: I0912 05:58:54.295594 2552 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 05:58:54.295716 kubelet[2552]: I0912 05:58:54.295614 2552 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 05:58:54.297509 kubelet[2552]: I0912 05:58:54.297497 2552 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 05:58:54.297536 kubelet[2552]: I0912 05:58:54.297511 2552 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 05:58:54.298400 kubelet[2552]: I0912 05:58:54.298386 2552 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 05:58:54.301195 kubelet[2552]: I0912 05:58:54.301183 2552 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 05:58:54.301219 kubelet[2552]: I0912 05:58:54.301203 2552 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 05:58:54.301561 kubelet[2552]: I0912 05:58:54.301546 2552 kubelet.go:352] "Adding apiserver pod source"
Sep 12 05:58:54.301589 kubelet[2552]: I0912 05:58:54.301568 2552 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 05:58:54.306138 kubelet[2552]: W0912 05:58:54.306113 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Sep 12 05:58:54.306185 kubelet[2552]: E0912 05:58:54.306143 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError"
Sep 12 05:58:54.306515 kubelet[2552]: W0912 05:58:54.306493 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Sep 12 05:58:54.306541 kubelet[2552]: E0912 05:58:54.306516 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError"
Sep 12 05:58:54.307544 kubelet[2552]: I0912 05:58:54.307526 2552 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 05:58:54.310411 kubelet[2552]: I0912 05:58:54.309692 2552 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 05:58:54.310411 kubelet[2552]: W0912 05:58:54.309720 2552 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 05:58:54.310683 kubelet[2552]: I0912 05:58:54.310671 2552 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 05:58:54.310709 kubelet[2552]: I0912 05:58:54.310690 2552 server.go:1287] "Started kubelet"
Sep 12 05:58:54.311714 kubelet[2552]: I0912 05:58:54.311284 2552 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 05:58:54.312610 kubelet[2552]: I0912 05:58:54.312600 2552 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 05:58:54.315171 kubelet[2552]: I0912 05:58:54.314248 2552 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 05:58:54.315171 kubelet[2552]: I0912 05:58:54.314388 2552 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 05:58:54.316363 kubelet[2552]: I0912 05:58:54.315997 2552 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 05:58:54.317322 kubelet[2552]: E0912 05:58:54.315198 2552 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864737cc5a94456 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:58:54.310679638 +0000 UTC m=+0.483979229,LastTimestamp:2025-09-12 05:58:54.310679638 +0000 UTC m=+0.483979229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 05:58:54.317630 kubelet[2552]: I0912 05:58:54.317608 2552 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 05:58:54.321734 kubelet[2552]: E0912 05:58:54.321720 2552 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 05:58:54.321911 kubelet[2552]: E0912 05:58:54.321882 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 05:58:54.321911 kubelet[2552]: I0912 05:58:54.321909 2552 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 05:58:54.322009 kubelet[2552]: I0912 05:58:54.321996 2552 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 05:58:54.322033 kubelet[2552]: I0912 05:58:54.322023 2552 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 05:58:54.322415 kubelet[2552]: I0912 05:58:54.322403 2552 factory.go:221] Registration of the systemd container factory successfully
Sep 12 05:58:54.322544 kubelet[2552]: I0912 05:58:54.322453 2552 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 05:58:54.322613 kubelet[2552]: W0912 05:58:54.322596 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Sep 12 05:58:54.322631 kubelet[2552]: E0912 05:58:54.322619 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError"
Sep 12 05:58:54.324093 kubelet[2552]: I0912 05:58:54.323768 2552 factory.go:221] Registration of the containerd container factory successfully
Sep 12 05:58:54.332113 kubelet[2552]: E0912 05:58:54.332073 2552 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms"
Sep 12 05:58:54.342222 kubelet[2552]: I0912 05:58:54.342195 2552 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 05:58:54.342801 kubelet[2552]: I0912 05:58:54.342786 2552 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 05:58:54.342801 kubelet[2552]: I0912 05:58:54.342799 2552 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 05:58:54.342852 kubelet[2552]: I0912 05:58:54.342810 2552 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 05:58:54.342852 kubelet[2552]: I0912 05:58:54.342814 2552 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 05:58:54.342852 kubelet[2552]: E0912 05:58:54.342836 2552 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 05:58:54.346456 kubelet[2552]: W0912 05:58:54.346438 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused
Sep 12 05:58:54.346491 kubelet[2552]: E0912 05:58:54.346461 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError"
Sep 12 05:58:54.346873 kubelet[2552]: I0912 05:58:54.346860 2552 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 05:58:54.346873 kubelet[2552]: I0912 05:58:54.346869 2552 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 05:58:54.346918 kubelet[2552]: I0912 05:58:54.346877 2552 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 05:58:54.347845 kubelet[2552]: I0912 05:58:54.347835 2552 policy_none.go:49] "None policy: Start"
Sep 12 05:58:54.347845 kubelet[2552]: I0912 05:58:54.347845 2552 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 05:58:54.347892 kubelet[2552]: I0912 05:58:54.347851 2552 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 05:58:54.352579 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 05:58:54.361216 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 05:58:54.376405 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 05:58:54.377174 kubelet[2552]: I0912 05:58:54.377160 2552 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 05:58:54.377268 kubelet[2552]: I0912 05:58:54.377257 2552 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 05:58:54.377294 kubelet[2552]: I0912 05:58:54.377268 2552 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 05:58:54.377624 kubelet[2552]: I0912 05:58:54.377535 2552 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 05:58:54.378481 kubelet[2552]: E0912 05:58:54.378467 2552 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 05:58:54.378528 kubelet[2552]: E0912 05:58:54.378490 2552 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 05:58:54.460662 systemd[1]: Created slice kubepods-burstable-pod8de1e2c45bce28da8593cfb918c08f54.slice - libcontainer container kubepods-burstable-pod8de1e2c45bce28da8593cfb918c08f54.slice.
Sep 12 05:58:54.478788 kubelet[2552]: I0912 05:58:54.478768 2552 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 05:58:54.484767 kubelet[2552]: E0912 05:58:54.479151 2552 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Sep 12 05:58:54.489873 kubelet[2552]: E0912 05:58:54.489856 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 05:58:54.493134 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 12 05:58:54.497617 kubelet[2552]: E0912 05:58:54.494785 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 05:58:54.497750 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 12 05:58:54.499872 kubelet[2552]: E0912 05:58:54.499855 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 05:58:54.523199 kubelet[2552]: I0912 05:58:54.523172 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:58:54.530498 kubelet[2552]: I0912 05:58:54.523282 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 05:58:54.530498 kubelet[2552]: I0912 05:58:54.523298 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:58:54.530498 kubelet[2552]: I0912 05:58:54.523312 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:58:54.530498 kubelet[2552]: I0912 05:58:54.523326 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:58:54.530498 kubelet[2552]: I0912 05:58:54.523351 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:58:54.699760 kubelet[2552]: I0912 05:58:54.523364 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:58:54.699760 kubelet[2552]: I0912 05:58:54.523377 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:58:54.699760 kubelet[2552]: I0912 05:58:54.523389 2552 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:58:54.699760 kubelet[2552]: E0912 05:58:54.532668 2552 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms"
Sep 12 05:58:54.699760 kubelet[2552]: I0912 05:58:54.680991 2552 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 05:58:54.699760 kubelet[2552]: E0912 05:58:54.681194 2552 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost"
Sep 12 05:58:54.792353 containerd[1620]: time="2025-09-12T05:58:54.792284265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8de1e2c45bce28da8593cfb918c08f54,Namespace:kube-system,Attempt:0,}"
Sep 12 05:58:54.802675 containerd[1620]: time="2025-09-12T05:58:54.802592115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 05:58:54.802817 containerd[1620]: time="2025-09-12T05:58:54.802785773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 05:58:54.933340 kubelet[2552]: E0912 05:58:54.933314 2552 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms"
Sep 12 05:58:54.984617 containerd[1620]: time="2025-09-12T05:58:54.984437253Z" level=info msg="connecting to shim 065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea" address="unix:///run/containerd/s/ae6a7fef54ef48c781e32eab5debc3bf186f4e38aa952acb59bb0cade6ce9c0c" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:58:54.984973
containerd[1620]: time="2025-09-12T05:58:54.984927135Z" level=info msg="connecting to shim 9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c" address="unix:///run/containerd/s/28e35910d5866af34252558fd308fa09f6ba356cff8df006aed8ccb2813d01e7" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:58:54.988990 containerd[1620]: time="2025-09-12T05:58:54.988951129Z" level=info msg="connecting to shim b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb" address="unix:///run/containerd/s/b2951ee99f792691cad72e2ae6be5a55e310de3ad5fb74c433b70afea6021d7e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:58:55.082963 kubelet[2552]: I0912 05:58:55.082902 2552 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:58:55.083642 kubelet[2552]: E0912 05:58:55.083624 2552 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Sep 12 05:58:55.150654 systemd[1]: Started cri-containerd-065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea.scope - libcontainer container 065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea. Sep 12 05:58:55.153027 systemd[1]: Started cri-containerd-9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c.scope - libcontainer container 9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c. Sep 12 05:58:55.153871 systemd[1]: Started cri-containerd-b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb.scope - libcontainer container b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb. 
Sep 12 05:58:55.206378 containerd[1620]: time="2025-09-12T05:58:55.206351144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea\"" Sep 12 05:58:55.213233 containerd[1620]: time="2025-09-12T05:58:55.213142618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c\"" Sep 12 05:58:55.216544 containerd[1620]: time="2025-09-12T05:58:55.216529840Z" level=info msg="CreateContainer within sandbox \"065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 05:58:55.217403 containerd[1620]: time="2025-09-12T05:58:55.216777994Z" level=info msg="CreateContainer within sandbox \"9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 05:58:55.239715 containerd[1620]: time="2025-09-12T05:58:55.239683480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8de1e2c45bce28da8593cfb918c08f54,Namespace:kube-system,Attempt:0,} returns sandbox id \"b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb\"" Sep 12 05:58:55.240966 containerd[1620]: time="2025-09-12T05:58:55.240945228Z" level=info msg="CreateContainer within sandbox \"b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 05:58:55.336850 containerd[1620]: time="2025-09-12T05:58:55.336569825Z" level=info msg="Container b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4: CDI devices from CRI Config.CDIDevices: []" Sep 12 
05:58:55.336934 containerd[1620]: time="2025-09-12T05:58:55.336900124Z" level=info msg="Container 3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:58:55.345607 containerd[1620]: time="2025-09-12T05:58:55.345583218Z" level=info msg="Container 1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:58:55.348764 kubelet[2552]: W0912 05:58:55.348715 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Sep 12 05:58:55.348764 kubelet[2552]: E0912 05:58:55.348748 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:58:55.352648 kubelet[2552]: W0912 05:58:55.352592 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Sep 12 05:58:55.352648 kubelet[2552]: E0912 05:58:55.352623 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:58:55.370211 kubelet[2552]: W0912 05:58:55.370173 2552 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Sep 12 05:58:55.370363 kubelet[2552]: E0912 05:58:55.370255 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:58:55.406221 containerd[1620]: time="2025-09-12T05:58:55.406184994Z" level=info msg="CreateContainer within sandbox \"b998ea9c99bae74c705aadce282bafc2a1d4027bad4500e3652181eefa1703cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1\"" Sep 12 05:58:55.406860 containerd[1620]: time="2025-09-12T05:58:55.406839972Z" level=info msg="StartContainer for \"1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1\"" Sep 12 05:58:55.407808 containerd[1620]: time="2025-09-12T05:58:55.407768822Z" level=info msg="connecting to shim 1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1" address="unix:///run/containerd/s/b2951ee99f792691cad72e2ae6be5a55e310de3ad5fb74c433b70afea6021d7e" protocol=ttrpc version=3 Sep 12 05:58:55.409182 containerd[1620]: time="2025-09-12T05:58:55.409135714Z" level=info msg="CreateContainer within sandbox \"065a9b0799edef35fbb44c77ddeac38f54b5c94490f65f2245053779c2586fea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213\"" Sep 12 05:58:55.410806 containerd[1620]: time="2025-09-12T05:58:55.410714248Z" level=info msg="StartContainer for \"3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213\"" Sep 12 05:58:55.412323 
containerd[1620]: time="2025-09-12T05:58:55.412302201Z" level=info msg="connecting to shim 3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213" address="unix:///run/containerd/s/ae6a7fef54ef48c781e32eab5debc3bf186f4e38aa952acb59bb0cade6ce9c0c" protocol=ttrpc version=3 Sep 12 05:58:55.427893 containerd[1620]: time="2025-09-12T05:58:55.427798368Z" level=info msg="CreateContainer within sandbox \"9deced39ea47547144a22c6d8a054d8d1a8bd89bf00443b2ae28f043b9fda44c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4\"" Sep 12 05:58:55.428066 containerd[1620]: time="2025-09-12T05:58:55.428047638Z" level=info msg="StartContainer for \"b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4\"" Sep 12 05:58:55.428751 containerd[1620]: time="2025-09-12T05:58:55.428734179Z" level=info msg="connecting to shim b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4" address="unix:///run/containerd/s/28e35910d5866af34252558fd308fa09f6ba356cff8df006aed8ccb2813d01e7" protocol=ttrpc version=3 Sep 12 05:58:55.431661 systemd[1]: Started cri-containerd-1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1.scope - libcontainer container 1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1. Sep 12 05:58:55.432520 systemd[1]: Started cri-containerd-3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213.scope - libcontainer container 3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213. Sep 12 05:58:55.445639 systemd[1]: Started cri-containerd-b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4.scope - libcontainer container b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4. 
Sep 12 05:58:55.493827 containerd[1620]: time="2025-09-12T05:58:55.493790816Z" level=info msg="StartContainer for \"b46a8f6aa15ec9d49a9e7f4c27bd97dfae8a5fffafafa0800db13b842cfdb3e4\" returns successfully" Sep 12 05:58:55.507582 containerd[1620]: time="2025-09-12T05:58:55.507496974Z" level=info msg="StartContainer for \"1b8091cf2b4ad3f20257094800cf3d9d37d1e393eeaff0fbfdb3652e2dc612f1\" returns successfully" Sep 12 05:58:55.515390 containerd[1620]: time="2025-09-12T05:58:55.515366002Z" level=info msg="StartContainer for \"3a01178ab29d5365c370692597cc322f3188703fe50f8521d0cfe160d87d5213\" returns successfully" Sep 12 05:58:55.734517 kubelet[2552]: E0912 05:58:55.734488 2552 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Sep 12 05:58:55.802031 kubelet[2552]: E0912 05:58:55.801968 2552 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864737cc5a94456 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:58:54.310679638 +0000 UTC m=+0.483979229,LastTimestamp:2025-09-12 05:58:54.310679638 +0000 UTC m=+0.483979229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:58:55.854990 kubelet[2552]: W0912 05:58:55.854953 2552 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Sep 12 05:58:55.855068 kubelet[2552]: E0912 05:58:55.854994 2552 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Sep 12 05:58:55.885173 kubelet[2552]: I0912 05:58:55.885157 2552 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:58:55.885380 kubelet[2552]: E0912 05:58:55.885365 2552 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Sep 12 05:58:56.357314 kubelet[2552]: E0912 05:58:56.357202 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:56.358682 kubelet[2552]: E0912 05:58:56.358671 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:56.360892 kubelet[2552]: E0912 05:58:56.360858 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:57.000085 kubelet[2552]: E0912 05:58:57.000062 2552 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 12 05:58:57.336853 kubelet[2552]: E0912 05:58:57.336717 2552 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Sep 12 05:58:57.351508 kubelet[2552]: E0912 05:58:57.351490 2552 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 12 05:58:57.362454 kubelet[2552]: E0912 05:58:57.362260 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:57.362454 kubelet[2552]: E0912 05:58:57.362390 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:57.362849 kubelet[2552]: E0912 05:58:57.362837 2552 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:58:57.486631 kubelet[2552]: I0912 05:58:57.486612 2552 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:58:57.494511 kubelet[2552]: I0912 05:58:57.494493 2552 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 05:58:57.494511 kubelet[2552]: E0912 05:58:57.494514 2552 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 05:58:57.499499 kubelet[2552]: E0912 05:58:57.499481 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:57.600489 kubelet[2552]: E0912 05:58:57.600389 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:57.700534 kubelet[2552]: E0912 05:58:57.700503 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:57.801263 kubelet[2552]: E0912 05:58:57.801236 2552 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:57.901633 kubelet[2552]: E0912 05:58:57.901608 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:58.002302 kubelet[2552]: E0912 05:58:58.002274 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:58.102992 kubelet[2552]: E0912 05:58:58.102971 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:58.203669 kubelet[2552]: E0912 05:58:58.203596 2552 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:58.308080 kubelet[2552]: I0912 05:58:58.308062 2552 apiserver.go:52] "Watching apiserver" Sep 12 05:58:58.322240 kubelet[2552]: I0912 05:58:58.322156 2552 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 05:58:58.325588 kubelet[2552]: I0912 05:58:58.325429 2552 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:58.331436 kubelet[2552]: I0912 05:58:58.331417 2552 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:58.333568 kubelet[2552]: I0912 05:58:58.333475 2552 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:58.337285 systemd[1]: Reload requested from client PID 2821 ('systemctl') (unit session-9.scope)... Sep 12 05:58:58.337308 systemd[1]: Reloading... 
Sep 12 05:58:58.364026 kubelet[2552]: I0912 05:58:58.364008 2552 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:58.364913 kubelet[2552]: I0912 05:58:58.364203 2552 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:58.369579 kubelet[2552]: E0912 05:58:58.368208 2552 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:58.369646 kubelet[2552]: E0912 05:58:58.369626 2552 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:58.391565 zram_generator::config[2868]: No configuration found. Sep 12 05:58:58.469736 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 12 05:58:58.546312 systemd[1]: Reloading finished in 208 ms. Sep 12 05:58:58.561484 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:58:58.561752 kubelet[2552]: I0912 05:58:58.561717 2552 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:58:58.573120 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 05:58:58.573279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:58:58.573311 systemd[1]: kubelet.service: Consumed 656ms CPU time, 128.2M memory peak. Sep 12 05:58:58.574912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:58:58.936190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 05:58:58.943828 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:58:58.975176 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:58:58.975377 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 05:58:58.975404 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:58:58.975623 kubelet[2932]: I0912 05:58:58.975599 2932 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:58:58.980054 kubelet[2932]: I0912 05:58:58.980039 2932 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 05:58:58.980054 kubelet[2932]: I0912 05:58:58.980052 2932 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:58:58.980182 kubelet[2932]: I0912 05:58:58.980171 2932 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 05:58:58.980842 kubelet[2932]: I0912 05:58:58.980831 2932 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 05:58:58.989415 kubelet[2932]: I0912 05:58:58.989398 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:58:58.991543 kubelet[2932]: I0912 05:58:58.991529 2932 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:58:58.995435 kubelet[2932]: I0912 05:58:58.995308 2932 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 05:58:58.995435 kubelet[2932]: I0912 05:58:58.995413 2932 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:58:58.995521 kubelet[2932]: I0912 05:58:58.995426 2932 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:58:58.995592 kubelet[2932]: I0912 05:58:58.995524 2932 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:58:58.995592 kubelet[2932]: I0912 05:58:58.995530 2932 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 05:58:58.995592 kubelet[2932]: I0912 05:58:58.995577 2932 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:58:58.995699 kubelet[2932]: I0912 05:58:58.995690 2932 kubelet.go:446] "Attempting to sync node with API server" Sep 12 05:58:58.995721 kubelet[2932]: I0912 05:58:58.995705 2932 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:58:58.995721 kubelet[2932]: I0912 05:58:58.995719 2932 kubelet.go:352] "Adding apiserver pod source" Sep 12 05:58:58.996615 kubelet[2932]: I0912 05:58:58.995725 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:58:58.997052 kubelet[2932]: I0912 05:58:58.997040 2932 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:58:58.997257 kubelet[2932]: I0912 05:58:58.997247 2932 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 05:58:58.997474 kubelet[2932]: I0912 05:58:58.997466 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 05:58:58.997499 kubelet[2932]: I0912 05:58:58.997482 2932 server.go:1287] "Started kubelet" Sep 12 05:58:58.999622 kubelet[2932]: I0912 05:58:58.999606 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:58:59.006868 kubelet[2932]: I0912 05:58:59.006838 2932 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:58:59.007923 kubelet[2932]: I0912 05:58:59.007792 2932 server.go:479] "Adding debug handlers to kubelet server" Sep 12 05:58:59.007966 kubelet[2932]: I0912 05:58:59.007949 2932 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 05:58:59.008093 kubelet[2932]: E0912 05:58:59.008079 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:58:59.009280 kubelet[2932]: I0912 05:58:59.008981 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:58:59.009280 kubelet[2932]: I0912 05:58:59.009088 2932 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:58:59.009280 kubelet[2932]: I0912 05:58:59.009186 2932 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:58:59.009411 kubelet[2932]: I0912 05:58:59.009395 2932 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 05:58:59.009483 kubelet[2932]: I0912 05:58:59.009463 2932 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:58:59.010962 kubelet[2932]: I0912 05:58:59.010456 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 05:58:59.011369 kubelet[2932]: I0912 05:58:59.011354 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 05:58:59.011393 kubelet[2932]: I0912 05:58:59.011371 2932 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 05:58:59.011393 kubelet[2932]: I0912 05:58:59.011380 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 05:58:59.011393 kubelet[2932]: I0912 05:58:59.011384 2932 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 05:58:59.011444 kubelet[2932]: E0912 05:58:59.011405 2932 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:58:59.016039 kubelet[2932]: I0912 05:58:59.015989 2932 factory.go:221] Registration of the systemd container factory successfully Sep 12 05:58:59.017810 kubelet[2932]: I0912 05:58:59.016218 2932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:58:59.018740 kubelet[2932]: E0912 05:58:59.018712 2932 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:58:59.019830 kubelet[2932]: I0912 05:58:59.019822 2932 factory.go:221] Registration of the containerd container factory successfully Sep 12 05:58:59.043596 kubelet[2932]: I0912 05:58:59.043579 2932 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 05:58:59.043596 kubelet[2932]: I0912 05:58:59.043592 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 05:58:59.043695 kubelet[2932]: I0912 05:58:59.043605 2932 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:58:59.043735 kubelet[2932]: I0912 05:58:59.043720 2932 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 05:58:59.043760 kubelet[2932]: I0912 05:58:59.043729 2932 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 05:58:59.043760 kubelet[2932]: I0912 05:58:59.043740 2932 policy_none.go:49] "None policy: Start" Sep 12 05:58:59.043760 kubelet[2932]: I0912 05:58:59.043746 2932 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 05:58:59.043760 kubelet[2932]: I0912 05:58:59.043752 
2932 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:58:59.043821 kubelet[2932]: I0912 05:58:59.043808 2932 state_mem.go:75] "Updated machine memory state" Sep 12 05:58:59.045944 kubelet[2932]: I0912 05:58:59.045905 2932 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 05:58:59.046587 kubelet[2932]: I0912 05:58:59.046436 2932 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:58:59.046587 kubelet[2932]: I0912 05:58:59.046444 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:58:59.046587 kubelet[2932]: I0912 05:58:59.046541 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:58:59.047737 kubelet[2932]: E0912 05:58:59.047728 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 05:58:59.112052 kubelet[2932]: I0912 05:58:59.112024 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:59.112224 kubelet[2932]: I0912 05:58:59.112047 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.112270 kubelet[2932]: I0912 05:58:59.112087 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:59.115009 kubelet[2932]: E0912 05:58:59.114997 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:59.115254 kubelet[2932]: E0912 05:58:59.115141 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:59.115254 kubelet[2932]: E0912 05:58:59.115193 
2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.148259 kubelet[2932]: I0912 05:58:59.148244 2932 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:58:59.151451 kubelet[2932]: I0912 05:58:59.151434 2932 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 05:58:59.151520 kubelet[2932]: I0912 05:58:59.151477 2932 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 05:58:59.310486 kubelet[2932]: I0912 05:58:59.310415 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:59.310486 kubelet[2932]: I0912 05:58:59.310437 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:59.310486 kubelet[2932]: I0912 05:58:59.310448 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.310486 kubelet[2932]: I0912 05:58:59.310459 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.310486 kubelet[2932]: I0912 05:58:59.310468 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.310643 kubelet[2932]: I0912 05:58:59.310482 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.310643 kubelet[2932]: I0912 05:58:59.310493 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de1e2c45bce28da8593cfb918c08f54-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8de1e2c45bce28da8593cfb918c08f54\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:58:59.310643 kubelet[2932]: I0912 05:58:59.310502 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:58:59.310643 kubelet[2932]: I0912 05:58:59.310511 2932 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 05:58:59.996753 kubelet[2932]: I0912 05:58:59.996652 2932 apiserver.go:52] "Watching apiserver" Sep 12 05:59:00.010176 kubelet[2932]: I0912 05:59:00.010150 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 05:59:00.049271 kubelet[2932]: I0912 05:59:00.049223 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.049213124 podStartE2EDuration="2.049213124s" podCreationTimestamp="2025-09-12 05:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:00.045836772 +0000 UTC m=+1.097218756" watchObservedRunningTime="2025-09-12 05:59:00.049213124 +0000 UTC m=+1.100595107" Sep 12 05:59:00.052442 kubelet[2932]: I0912 05:59:00.052344 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.052337028 podStartE2EDuration="2.052337028s" podCreationTimestamp="2025-09-12 05:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:00.049429008 +0000 UTC m=+1.100810988" watchObservedRunningTime="2025-09-12 05:59:00.052337028 +0000 UTC m=+1.103719009" Sep 12 05:59:00.056667 kubelet[2932]: I0912 05:59:00.056638 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.056629344 podStartE2EDuration="2.056629344s" podCreationTimestamp="2025-09-12 05:58:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:00.052894446 +0000 UTC m=+1.104276431" watchObservedRunningTime="2025-09-12 05:59:00.056629344 +0000 UTC m=+1.108011334" Sep 12 05:59:05.061468 kubelet[2932]: I0912 05:59:05.061431 2932 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 05:59:05.062242 kubelet[2932]: I0912 05:59:05.061747 2932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 05:59:05.062265 containerd[1620]: time="2025-09-12T05:59:05.061634320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 05:59:06.109191 systemd[1]: Created slice kubepods-besteffort-podc96a69a7_763b_4846_8aca_972d2614995a.slice - libcontainer container kubepods-besteffort-podc96a69a7_763b_4846_8aca_972d2614995a.slice. Sep 12 05:59:06.155154 kubelet[2932]: I0912 05:59:06.155127 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c96a69a7-763b-4846-8aca-972d2614995a-xtables-lock\") pod \"kube-proxy-jvswk\" (UID: \"c96a69a7-763b-4846-8aca-972d2614995a\") " pod="kube-system/kube-proxy-jvswk" Sep 12 05:59:06.155154 kubelet[2932]: I0912 05:59:06.155154 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c96a69a7-763b-4846-8aca-972d2614995a-kube-proxy\") pod \"kube-proxy-jvswk\" (UID: \"c96a69a7-763b-4846-8aca-972d2614995a\") " pod="kube-system/kube-proxy-jvswk" Sep 12 05:59:06.155474 kubelet[2932]: I0912 05:59:06.155205 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5mk\" (UniqueName: 
\"kubernetes.io/projected/c96a69a7-763b-4846-8aca-972d2614995a-kube-api-access-cd5mk\") pod \"kube-proxy-jvswk\" (UID: \"c96a69a7-763b-4846-8aca-972d2614995a\") " pod="kube-system/kube-proxy-jvswk" Sep 12 05:59:06.155474 kubelet[2932]: I0912 05:59:06.155222 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c96a69a7-763b-4846-8aca-972d2614995a-lib-modules\") pod \"kube-proxy-jvswk\" (UID: \"c96a69a7-763b-4846-8aca-972d2614995a\") " pod="kube-system/kube-proxy-jvswk" Sep 12 05:59:06.168594 systemd[1]: Created slice kubepods-besteffort-pod1eff1b42_ec4e_4e5a_89f8_4fa1e24cb13f.slice - libcontainer container kubepods-besteffort-pod1eff1b42_ec4e_4e5a_89f8_4fa1e24cb13f.slice. Sep 12 05:59:06.256017 kubelet[2932]: I0912 05:59:06.255691 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f-var-lib-calico\") pod \"tigera-operator-755d956888-7h959\" (UID: \"1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f\") " pod="tigera-operator/tigera-operator-755d956888-7h959" Sep 12 05:59:06.256017 kubelet[2932]: I0912 05:59:06.255721 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92xz\" (UniqueName: \"kubernetes.io/projected/1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f-kube-api-access-w92xz\") pod \"tigera-operator-755d956888-7h959\" (UID: \"1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f\") " pod="tigera-operator/tigera-operator-755d956888-7h959" Sep 12 05:59:06.425827 containerd[1620]: time="2025-09-12T05:59:06.425800162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvswk,Uid:c96a69a7-763b-4846-8aca-972d2614995a,Namespace:kube-system,Attempt:0,}" Sep 12 05:59:06.434988 containerd[1620]: time="2025-09-12T05:59:06.434925148Z" level=info msg="connecting to shim 
fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264" address="unix:///run/containerd/s/a39cacd0005db24ebb55673ea596e51aaaeb680d6eeaddd6f7ad728c226b5378" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:06.452643 systemd[1]: Started cri-containerd-fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264.scope - libcontainer container fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264. Sep 12 05:59:06.466549 containerd[1620]: time="2025-09-12T05:59:06.466515785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvswk,Uid:c96a69a7-763b-4846-8aca-972d2614995a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264\"" Sep 12 05:59:06.468855 containerd[1620]: time="2025-09-12T05:59:06.468758410Z" level=info msg="CreateContainer within sandbox \"fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 05:59:06.471126 containerd[1620]: time="2025-09-12T05:59:06.471111310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7h959,Uid:1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f,Namespace:tigera-operator,Attempt:0,}" Sep 12 05:59:06.478542 containerd[1620]: time="2025-09-12T05:59:06.478527124Z" level=info msg="Container 451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:06.487475 containerd[1620]: time="2025-09-12T05:59:06.487453786Z" level=info msg="CreateContainer within sandbox \"fffd7829808cb3a850346fb8494751cff9c478570f479e9cfc1526b4f6001264\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2\"" Sep 12 05:59:06.488528 containerd[1620]: time="2025-09-12T05:59:06.487979666Z" level=info msg="StartContainer for \"451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2\"" Sep 12 
05:59:06.489320 containerd[1620]: time="2025-09-12T05:59:06.489302997Z" level=info msg="connecting to shim 451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2" address="unix:///run/containerd/s/a39cacd0005db24ebb55673ea596e51aaaeb680d6eeaddd6f7ad728c226b5378" protocol=ttrpc version=3 Sep 12 05:59:06.491259 containerd[1620]: time="2025-09-12T05:59:06.491236469Z" level=info msg="connecting to shim 61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792" address="unix:///run/containerd/s/2cf4aa6b04a76db86ad7e999c7c747e9982946ec17a5dcc9059787f4bc77e388" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:06.507760 systemd[1]: Started cri-containerd-451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2.scope - libcontainer container 451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2. Sep 12 05:59:06.510983 systemd[1]: Started cri-containerd-61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792.scope - libcontainer container 61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792. 
Sep 12 05:59:06.538285 containerd[1620]: time="2025-09-12T05:59:06.538264797Z" level=info msg="StartContainer for \"451168f4b2b2668e653ca42b7bc848b4999fe4a0fd516ba2891997e53d9003b2\" returns successfully" Sep 12 05:59:06.555380 containerd[1620]: time="2025-09-12T05:59:06.555354925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7h959,Uid:1eff1b42-ec4e-4e5a-89f8-4fa1e24cb13f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792\"" Sep 12 05:59:06.556878 containerd[1620]: time="2025-09-12T05:59:06.556416556Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 05:59:07.062275 kubelet[2932]: I0912 05:59:07.062140 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jvswk" podStartSLOduration=1.062127252 podStartE2EDuration="1.062127252s" podCreationTimestamp="2025-09-12 05:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:07.061694812 +0000 UTC m=+8.113076812" watchObservedRunningTime="2025-09-12 05:59:07.062127252 +0000 UTC m=+8.113509244" Sep 12 05:59:07.265328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007002917.mount: Deactivated successfully. Sep 12 05:59:08.425538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417043562.mount: Deactivated successfully. 
Sep 12 05:59:09.672043 containerd[1620]: time="2025-09-12T05:59:09.671641921Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:09.672043 containerd[1620]: time="2025-09-12T05:59:09.672003762Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 05:59:09.672043 containerd[1620]: time="2025-09-12T05:59:09.672019063Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:09.673123 containerd[1620]: time="2025-09-12T05:59:09.673111582Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:09.673530 containerd[1620]: time="2025-09-12T05:59:09.673512947Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.117076358s" Sep 12 05:59:09.673581 containerd[1620]: time="2025-09-12T05:59:09.673530902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 05:59:09.675145 containerd[1620]: time="2025-09-12T05:59:09.675134578Z" level=info msg="CreateContainer within sandbox \"61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 05:59:09.681810 containerd[1620]: time="2025-09-12T05:59:09.681789443Z" level=info msg="Container 
137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:09.693496 containerd[1620]: time="2025-09-12T05:59:09.693480285Z" level=info msg="CreateContainer within sandbox \"61d1ed505f39b264b4836a04891a5b34055105575827f073d062651d771bc792\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32\"" Sep 12 05:59:09.694477 containerd[1620]: time="2025-09-12T05:59:09.693927551Z" level=info msg="StartContainer for \"137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32\"" Sep 12 05:59:09.694477 containerd[1620]: time="2025-09-12T05:59:09.694297712Z" level=info msg="connecting to shim 137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32" address="unix:///run/containerd/s/2cf4aa6b04a76db86ad7e999c7c747e9982946ec17a5dcc9059787f4bc77e388" protocol=ttrpc version=3 Sep 12 05:59:09.710651 systemd[1]: Started cri-containerd-137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32.scope - libcontainer container 137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32. 
Sep 12 05:59:09.728236 containerd[1620]: time="2025-09-12T05:59:09.728208533Z" level=info msg="StartContainer for \"137ba466abc55e95975cbc56117216839db8f141c49d1d8802f50c4e2b46be32\" returns successfully" Sep 12 05:59:10.061631 kubelet[2932]: I0912 05:59:10.061524 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-7h959" podStartSLOduration=0.943397266 podStartE2EDuration="4.061513319s" podCreationTimestamp="2025-09-12 05:59:06 +0000 UTC" firstStartedPulling="2025-09-12 05:59:06.555975699 +0000 UTC m=+7.607357681" lastFinishedPulling="2025-09-12 05:59:09.674091752 +0000 UTC m=+10.725473734" observedRunningTime="2025-09-12 05:59:10.061000805 +0000 UTC m=+11.112382795" watchObservedRunningTime="2025-09-12 05:59:10.061513319 +0000 UTC m=+11.112895310" Sep 12 05:59:14.883368 sudo[1958]: pam_unix(sudo:session): session closed for user root Sep 12 05:59:14.887249 sshd[1957]: Connection closed by 139.178.68.195 port 44512 Sep 12 05:59:14.896853 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Sep 12 05:59:14.899543 systemd[1]: sshd@6-139.178.70.104:22-139.178.68.195:44512.service: Deactivated successfully. Sep 12 05:59:14.900992 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 05:59:14.901367 systemd[1]: session-9.scope: Consumed 2.857s CPU time, 153.1M memory peak. Sep 12 05:59:14.903428 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Sep 12 05:59:14.904316 systemd-logind[1591]: Removed session 9. Sep 12 05:59:17.535061 systemd[1]: Created slice kubepods-besteffort-podcc536617_7830_4d66_b93c_15a1586e688f.slice - libcontainer container kubepods-besteffort-podcc536617_7830_4d66_b93c_15a1586e688f.slice. 
Sep 12 05:59:17.629438 kubelet[2932]: I0912 05:59:17.629350 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc536617-7830-4d66-b93c-15a1586e688f-tigera-ca-bundle\") pod \"calico-typha-598bbb56b9-7c99n\" (UID: \"cc536617-7830-4d66-b93c-15a1586e688f\") " pod="calico-system/calico-typha-598bbb56b9-7c99n" Sep 12 05:59:17.629438 kubelet[2932]: I0912 05:59:17.629382 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc536617-7830-4d66-b93c-15a1586e688f-typha-certs\") pod \"calico-typha-598bbb56b9-7c99n\" (UID: \"cc536617-7830-4d66-b93c-15a1586e688f\") " pod="calico-system/calico-typha-598bbb56b9-7c99n" Sep 12 05:59:17.629438 kubelet[2932]: I0912 05:59:17.629396 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6zp\" (UniqueName: \"kubernetes.io/projected/cc536617-7830-4d66-b93c-15a1586e688f-kube-api-access-vv6zp\") pod \"calico-typha-598bbb56b9-7c99n\" (UID: \"cc536617-7830-4d66-b93c-15a1586e688f\") " pod="calico-system/calico-typha-598bbb56b9-7c99n" Sep 12 05:59:17.841139 containerd[1620]: time="2025-09-12T05:59:17.841061738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598bbb56b9-7c99n,Uid:cc536617-7830-4d66-b93c-15a1586e688f,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:17.865677 containerd[1620]: time="2025-09-12T05:59:17.865648542Z" level=info msg="connecting to shim 568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857" address="unix:///run/containerd/s/9e76e52caef02942d74d0efd15332c822793c8e2b53516115c81291f46dc7158" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:17.887669 systemd[1]: Started cri-containerd-568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857.scope - libcontainer container 
568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857. Sep 12 05:59:17.926314 systemd[1]: Created slice kubepods-besteffort-podb0c66f65_7755_4f52_9519_4ab1cb1a4eaf.slice - libcontainer container kubepods-besteffort-podb0c66f65_7755_4f52_9519_4ab1cb1a4eaf.slice. Sep 12 05:59:17.963842 containerd[1620]: time="2025-09-12T05:59:17.963775931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-598bbb56b9-7c99n,Uid:cc536617-7830-4d66-b93c-15a1586e688f,Namespace:calico-system,Attempt:0,} returns sandbox id \"568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857\"" Sep 12 05:59:17.965164 containerd[1620]: time="2025-09-12T05:59:17.965120795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 05:59:18.031738 kubelet[2932]: I0912 05:59:18.031057 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-var-run-calico\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031738 kubelet[2932]: I0912 05:59:18.031087 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-var-lib-calico\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031738 kubelet[2932]: I0912 05:59:18.031099 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-cni-log-dir\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031738 kubelet[2932]: I0912 05:59:18.031108 2932 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-lib-modules\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031738 kubelet[2932]: I0912 05:59:18.031118 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-policysync\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031898 kubelet[2932]: I0912 05:59:18.031127 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhxr\" (UniqueName: \"kubernetes.io/projected/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-kube-api-access-dwhxr\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031898 kubelet[2932]: I0912 05:59:18.031140 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-tigera-ca-bundle\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031898 kubelet[2932]: I0912 05:59:18.031150 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-cni-net-dir\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031898 kubelet[2932]: I0912 05:59:18.031161 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-cni-bin-dir\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031898 kubelet[2932]: I0912 05:59:18.031170 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-xtables-lock\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031985 kubelet[2932]: I0912 05:59:18.031180 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-flexvol-driver-host\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.031985 kubelet[2932]: I0912 05:59:18.031190 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b0c66f65-7755-4f52-9519-4ab1cb1a4eaf-node-certs\") pod \"calico-node-4jcz6\" (UID: \"b0c66f65-7755-4f52-9519-4ab1cb1a4eaf\") " pod="calico-system/calico-node-4jcz6" Sep 12 05:59:18.137319 kubelet[2932]: E0912 05:59:18.137257 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.137319 kubelet[2932]: W0912 05:59:18.137277 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.139484 kubelet[2932]: E0912 05:59:18.139455 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.144425 kubelet[2932]: E0912 05:59:18.144409 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.144425 kubelet[2932]: W0912 05:59:18.144420 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.144505 kubelet[2932]: E0912 05:59:18.144432 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.230086 containerd[1620]: time="2025-09-12T05:59:18.230035078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jcz6,Uid:b0c66f65-7755-4f52-9519-4ab1cb1a4eaf,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:18.233640 kubelet[2932]: E0912 05:59:18.233602 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:18.246068 containerd[1620]: time="2025-09-12T05:59:18.246027898Z" level=info msg="connecting to shim c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4" address="unix:///run/containerd/s/e32586b24658170bf8d1742b6fe910d8836ec7e5527f4d27cd0f5650213c7028" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:18.299683 systemd[1]: Started cri-containerd-c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4.scope - libcontainer container c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4. 
Sep 12 05:59:18.320222 kubelet[2932]: E0912 05:59:18.320205 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.320222 kubelet[2932]: W0912 05:59:18.320218 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.320464 kubelet[2932]: E0912 05:59:18.320231 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.320464 kubelet[2932]: E0912 05:59:18.320333 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.320464 kubelet[2932]: W0912 05:59:18.320338 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.320464 kubelet[2932]: E0912 05:59:18.320343 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.320464 kubelet[2932]: E0912 05:59:18.320423 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.320464 kubelet[2932]: W0912 05:59:18.320428 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.320464 kubelet[2932]: E0912 05:59:18.320433 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.320624 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322495 kubelet[2932]: W0912 05:59:18.320629 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.320633 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.321664 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322495 kubelet[2932]: W0912 05:59:18.321670 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.321676 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.321752 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322495 kubelet[2932]: W0912 05:59:18.321756 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.321761 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.322495 kubelet[2932]: E0912 05:59:18.321842 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322671 kubelet[2932]: W0912 05:59:18.321847 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322671 kubelet[2932]: E0912 05:59:18.321851 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.322753 kubelet[2932]: E0912 05:59:18.322745 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322753 kubelet[2932]: W0912 05:59:18.322751 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322801 kubelet[2932]: E0912 05:59:18.322757 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.322855 kubelet[2932]: E0912 05:59:18.322850 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322888 kubelet[2932]: W0912 05:59:18.322855 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.322888 kubelet[2932]: E0912 05:59:18.322860 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.322943 kubelet[2932]: E0912 05:59:18.322937 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.322943 kubelet[2932]: W0912 05:59:18.322942 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323008 kubelet[2932]: E0912 05:59:18.322947 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.323052 kubelet[2932]: E0912 05:59:18.323016 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323052 kubelet[2932]: W0912 05:59:18.323021 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323052 kubelet[2932]: E0912 05:59:18.323025 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.323112 kubelet[2932]: E0912 05:59:18.323098 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323112 kubelet[2932]: W0912 05:59:18.323104 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323112 kubelet[2932]: E0912 05:59:18.323111 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.323208 kubelet[2932]: E0912 05:59:18.323201 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323208 kubelet[2932]: W0912 05:59:18.323206 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323257 kubelet[2932]: E0912 05:59:18.323211 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.323292 kubelet[2932]: E0912 05:59:18.323282 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323292 kubelet[2932]: W0912 05:59:18.323287 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323292 kubelet[2932]: E0912 05:59:18.323291 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.323384 kubelet[2932]: E0912 05:59:18.323364 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323384 kubelet[2932]: W0912 05:59:18.323369 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323384 kubelet[2932]: E0912 05:59:18.323373 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.323461 kubelet[2932]: E0912 05:59:18.323445 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.323461 kubelet[2932]: W0912 05:59:18.323449 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.323461 kubelet[2932]: E0912 05:59:18.323454 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.324619 kubelet[2932]: E0912 05:59:18.324610 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.324619 kubelet[2932]: W0912 05:59:18.324617 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.324677 kubelet[2932]: E0912 05:59:18.324623 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.324710 kubelet[2932]: E0912 05:59:18.324702 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.324710 kubelet[2932]: W0912 05:59:18.324708 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.324757 kubelet[2932]: E0912 05:59:18.324721 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.324795 kubelet[2932]: E0912 05:59:18.324785 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.324827 kubelet[2932]: W0912 05:59:18.324789 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.324827 kubelet[2932]: E0912 05:59:18.324801 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.324888 kubelet[2932]: E0912 05:59:18.324880 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.324888 kubelet[2932]: W0912 05:59:18.324886 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.324936 kubelet[2932]: E0912 05:59:18.324891 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.333309 kubelet[2932]: E0912 05:59:18.333191 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.333309 kubelet[2932]: W0912 05:59:18.333206 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.333309 kubelet[2932]: E0912 05:59:18.333218 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.333309 kubelet[2932]: I0912 05:59:18.333234 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbcdb877-a070-4ed8-b9fc-c8a9acc42275-kubelet-dir\") pod \"csi-node-driver-dftxb\" (UID: \"fbcdb877-a070-4ed8-b9fc-c8a9acc42275\") " pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:18.333476 kubelet[2932]: E0912 05:59:18.333469 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.333559 kubelet[2932]: W0912 05:59:18.333507 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.333559 kubelet[2932]: E0912 05:59:18.333519 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.333559 kubelet[2932]: I0912 05:59:18.333529 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fbcdb877-a070-4ed8-b9fc-c8a9acc42275-socket-dir\") pod \"csi-node-driver-dftxb\" (UID: \"fbcdb877-a070-4ed8-b9fc-c8a9acc42275\") " pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:18.333765 kubelet[2932]: E0912 05:59:18.333691 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.333765 kubelet[2932]: W0912 05:59:18.333698 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.333765 kubelet[2932]: E0912 05:59:18.333706 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.333765 kubelet[2932]: I0912 05:59:18.333717 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zmmw\" (UniqueName: \"kubernetes.io/projected/fbcdb877-a070-4ed8-b9fc-c8a9acc42275-kube-api-access-8zmmw\") pod \"csi-node-driver-dftxb\" (UID: \"fbcdb877-a070-4ed8-b9fc-c8a9acc42275\") " pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:18.333871 kubelet[2932]: E0912 05:59:18.333865 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.333903 kubelet[2932]: W0912 05:59:18.333898 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.333939 kubelet[2932]: E0912 05:59:18.333934 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.333972 kubelet[2932]: I0912 05:59:18.333966 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fbcdb877-a070-4ed8-b9fc-c8a9acc42275-varrun\") pod \"csi-node-driver-dftxb\" (UID: \"fbcdb877-a070-4ed8-b9fc-c8a9acc42275\") " pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:18.334110 kubelet[2932]: E0912 05:59:18.334098 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334110 kubelet[2932]: W0912 05:59:18.334108 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334156 kubelet[2932]: E0912 05:59:18.334117 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.334196 kubelet[2932]: E0912 05:59:18.334190 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334196 kubelet[2932]: W0912 05:59:18.334195 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334232 kubelet[2932]: E0912 05:59:18.334206 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.334303 kubelet[2932]: E0912 05:59:18.334296 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334303 kubelet[2932]: W0912 05:59:18.334302 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334346 kubelet[2932]: E0912 05:59:18.334309 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.334391 kubelet[2932]: E0912 05:59:18.334383 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334391 kubelet[2932]: W0912 05:59:18.334391 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334431 kubelet[2932]: E0912 05:59:18.334396 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.334472 kubelet[2932]: E0912 05:59:18.334465 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334472 kubelet[2932]: W0912 05:59:18.334471 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334515 kubelet[2932]: E0912 05:59:18.334483 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.334871 kubelet[2932]: E0912 05:59:18.334861 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.334871 kubelet[2932]: W0912 05:59:18.334868 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.334924 kubelet[2932]: E0912 05:59:18.334875 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.335048 kubelet[2932]: E0912 05:59:18.334946 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.335048 kubelet[2932]: W0912 05:59:18.334950 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.335048 kubelet[2932]: E0912 05:59:18.334955 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.335048 kubelet[2932]: I0912 05:59:18.334966 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbcdb877-a070-4ed8-b9fc-c8a9acc42275-registration-dir\") pod \"csi-node-driver-dftxb\" (UID: \"fbcdb877-a070-4ed8-b9fc-c8a9acc42275\") " pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:18.335048 kubelet[2932]: E0912 05:59:18.335048 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.335142 kubelet[2932]: W0912 05:59:18.335053 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.335142 kubelet[2932]: E0912 05:59:18.335139 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.335614 kubelet[2932]: E0912 05:59:18.335528 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.335614 kubelet[2932]: W0912 05:59:18.335534 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.335614 kubelet[2932]: E0912 05:59:18.335542 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.335850 kubelet[2932]: E0912 05:59:18.335767 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.335850 kubelet[2932]: W0912 05:59:18.335773 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.335850 kubelet[2932]: E0912 05:59:18.335779 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.336034 kubelet[2932]: E0912 05:59:18.335996 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.336034 kubelet[2932]: W0912 05:59:18.336001 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.336034 kubelet[2932]: E0912 05:59:18.336007 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.362235 containerd[1620]: time="2025-09-12T05:59:18.362210034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jcz6,Uid:b0c66f65-7755-4f52-9519-4ab1cb1a4eaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\"" Sep 12 05:59:18.436665 kubelet[2932]: E0912 05:59:18.435518 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.436665 kubelet[2932]: W0912 05:59:18.435532 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.436665 kubelet[2932]: E0912 05:59:18.435544 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.437535 kubelet[2932]: E0912 05:59:18.436736 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.437535 kubelet[2932]: W0912 05:59:18.436747 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.437535 kubelet[2932]: E0912 05:59:18.436767 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.437803 kubelet[2932]: E0912 05:59:18.437677 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.437803 kubelet[2932]: W0912 05:59:18.437684 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.437803 kubelet[2932]: E0912 05:59:18.437726 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.437930 kubelet[2932]: E0912 05:59:18.437924 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.437990 kubelet[2932]: W0912 05:59:18.437978 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.438119 kubelet[2932]: E0912 05:59:18.438097 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.438281 kubelet[2932]: E0912 05:59:18.438267 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.438281 kubelet[2932]: W0912 05:59:18.438274 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.438430 kubelet[2932]: E0912 05:59:18.438424 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.438480 kubelet[2932]: W0912 05:59:18.438474 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.438544 kubelet[2932]: E0912 05:59:18.438508 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.438584 kubelet[2932]: E0912 05:59:18.438462 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.438713 kubelet[2932]: E0912 05:59:18.438707 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.438762 kubelet[2932]: W0912 05:59:18.438756 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.438815 kubelet[2932]: E0912 05:59:18.438809 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.438952 kubelet[2932]: E0912 05:59:18.438940 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.438952 kubelet[2932]: W0912 05:59:18.438945 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.439045 kubelet[2932]: E0912 05:59:18.438995 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.439122 kubelet[2932]: E0912 05:59:18.439117 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.439201 kubelet[2932]: W0912 05:59:18.439147 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.439201 kubelet[2932]: E0912 05:59:18.439159 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.439390 kubelet[2932]: E0912 05:59:18.439377 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.439390 kubelet[2932]: W0912 05:59:18.439383 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.439524 kubelet[2932]: E0912 05:59:18.439503 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:18.448770 kubelet[2932]: E0912 05:59:18.448598 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.448770 kubelet[2932]: W0912 05:59:18.448606 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.448925 kubelet[2932]: E0912 05:59:18.448917 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:18.450929 kubelet[2932]: E0912 05:59:18.450900 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:18.450929 kubelet[2932]: W0912 05:59:18.450907 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:18.450929 kubelet[2932]: E0912 05:59:18.450914 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:19.451366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733436178.mount: Deactivated successfully. 
Sep 12 05:59:20.012465 kubelet[2932]: E0912 05:59:20.012131 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:20.996079 containerd[1620]: time="2025-09-12T05:59:20.995949111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:21.009836 containerd[1620]: time="2025-09-12T05:59:21.005147933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 05:59:21.012367 containerd[1620]: time="2025-09-12T05:59:21.012108094Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:21.020257 containerd[1620]: time="2025-09-12T05:59:21.020235554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:21.020691 containerd[1620]: time="2025-09-12T05:59:21.020674552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.055527995s" Sep 12 05:59:21.020720 containerd[1620]: time="2025-09-12T05:59:21.020692348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 05:59:21.021493 containerd[1620]: time="2025-09-12T05:59:21.021369915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 05:59:21.033968 containerd[1620]: time="2025-09-12T05:59:21.033943328Z" level=info msg="CreateContainer within sandbox \"568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 05:59:21.038250 containerd[1620]: time="2025-09-12T05:59:21.037138381Z" level=info msg="Container bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:21.043502 containerd[1620]: time="2025-09-12T05:59:21.043439923Z" level=info msg="CreateContainer within sandbox \"568a976a55d41f8f90fa659d259f8616f3f7649f357d3e04d5d00b399a2bb857\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490\"" Sep 12 05:59:21.044433 containerd[1620]: time="2025-09-12T05:59:21.043993168Z" level=info msg="StartContainer for \"bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490\"" Sep 12 05:59:21.045500 containerd[1620]: time="2025-09-12T05:59:21.045442263Z" level=info msg="connecting to shim bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490" address="unix:///run/containerd/s/9e76e52caef02942d74d0efd15332c822793c8e2b53516115c81291f46dc7158" protocol=ttrpc version=3 Sep 12 05:59:21.066705 systemd[1]: Started cri-containerd-bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490.scope - libcontainer container bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490. 
Sep 12 05:59:21.127274 containerd[1620]: time="2025-09-12T05:59:21.127207913Z" level=info msg="StartContainer for \"bbc75bc4f09fec006165ea92f063a6afccce1e12ddca80a054025705136e8490\" returns successfully" Sep 12 05:59:22.012461 kubelet[2932]: E0912 05:59:22.012422 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:22.147316 kubelet[2932]: E0912 05:59:22.147230 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:22.147316 kubelet[2932]: W0912 05:59:22.147249 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:22.147316 kubelet[2932]: E0912 05:59:22.147262 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:22.147529 kubelet[2932]: E0912 05:59:22.147473 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:22.147529 kubelet[2932]: W0912 05:59:22.147479 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:22.147529 kubelet[2932]: E0912 05:59:22.147486 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:22.170123 kubelet[2932]: E0912 05:59:22.165980 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:22.170123 kubelet[2932]: W0912 05:59:22.165984 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:22.170123 kubelet[2932]: E0912 05:59:22.165989 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:22.170123 kubelet[2932]: E0912 05:59:22.166133 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:22.170123 kubelet[2932]: W0912 05:59:22.166138 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:22.170123 kubelet[2932]: E0912 05:59:22.166145 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:59:22.170123 kubelet[2932]: E0912 05:59:22.166219 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:59:22.174141 kubelet[2932]: W0912 05:59:22.166223 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:59:22.174141 kubelet[2932]: E0912 05:59:22.166227 2932 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:59:22.267104 containerd[1620]: time="2025-09-12T05:59:22.266419296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:22.267353 containerd[1620]: time="2025-09-12T05:59:22.267138869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 05:59:22.267790 containerd[1620]: time="2025-09-12T05:59:22.267389826Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:22.269031 containerd[1620]: time="2025-09-12T05:59:22.269004395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:22.269796 containerd[1620]: time="2025-09-12T05:59:22.269774713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.248386667s" Sep 12 05:59:22.269846 containerd[1620]: time="2025-09-12T05:59:22.269795876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 05:59:22.271898 containerd[1620]: time="2025-09-12T05:59:22.271867889Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 05:59:22.276151 containerd[1620]: time="2025-09-12T05:59:22.276123293Z" level=info msg="Container 042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:22.282372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151225106.mount: Deactivated successfully. 
Sep 12 05:59:22.291773 containerd[1620]: time="2025-09-12T05:59:22.291646873Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\"" Sep 12 05:59:22.293387 containerd[1620]: time="2025-09-12T05:59:22.293341004Z" level=info msg="StartContainer for \"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\"" Sep 12 05:59:22.294293 containerd[1620]: time="2025-09-12T05:59:22.294266671Z" level=info msg="connecting to shim 042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1" address="unix:///run/containerd/s/e32586b24658170bf8d1742b6fe910d8836ec7e5527f4d27cd0f5650213c7028" protocol=ttrpc version=3 Sep 12 05:59:22.336667 systemd[1]: Started cri-containerd-042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1.scope - libcontainer container 042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1. Sep 12 05:59:22.392660 systemd[1]: cri-containerd-042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1.scope: Deactivated successfully. 
Sep 12 05:59:22.395712 containerd[1620]: time="2025-09-12T05:59:22.395656156Z" level=info msg="StartContainer for \"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\" returns successfully" Sep 12 05:59:22.406770 containerd[1620]: time="2025-09-12T05:59:22.406746136Z" level=info msg="received exit event container_id:\"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\" id:\"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\" pid:3585 exited_at:{seconds:1757656762 nanos:394466217}" Sep 12 05:59:22.478454 containerd[1620]: time="2025-09-12T05:59:22.478426227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\" id:\"042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1\" pid:3585 exited_at:{seconds:1757656762 nanos:394466217}" Sep 12 05:59:22.511311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042862a8e86537a50cd293214031b16d91d8f9a7b3271f358a27e192a55d24e1-rootfs.mount: Deactivated successfully. 
Sep 12 05:59:23.096485 kubelet[2932]: I0912 05:59:23.096217 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 05:59:23.097454 containerd[1620]: time="2025-09-12T05:59:23.097426389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 05:59:23.112728 kubelet[2932]: I0912 05:59:23.112136 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-598bbb56b9-7c99n" podStartSLOduration=3.055635325 podStartE2EDuration="6.112116328s" podCreationTimestamp="2025-09-12 05:59:17 +0000 UTC" firstStartedPulling="2025-09-12 05:59:17.964724183 +0000 UTC m=+19.016106163" lastFinishedPulling="2025-09-12 05:59:21.021205185 +0000 UTC m=+22.072587166" observedRunningTime="2025-09-12 05:59:22.104907351 +0000 UTC m=+23.156289337" watchObservedRunningTime="2025-09-12 05:59:23.112116328 +0000 UTC m=+24.163498321" Sep 12 05:59:24.012503 kubelet[2932]: E0912 05:59:24.012468 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:26.011675 kubelet[2932]: E0912 05:59:26.011642 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:27.124577 containerd[1620]: time="2025-09-12T05:59:27.124534643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:27.125190 containerd[1620]: time="2025-09-12T05:59:27.125168381Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 05:59:27.125643 containerd[1620]: time="2025-09-12T05:59:27.125621426Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:27.127087 containerd[1620]: time="2025-09-12T05:59:27.127058699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:27.127955 containerd[1620]: time="2025-09-12T05:59:27.127926156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.030342577s" Sep 12 05:59:27.127955 containerd[1620]: time="2025-09-12T05:59:27.127949118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 05:59:27.132284 containerd[1620]: time="2025-09-12T05:59:27.132197173Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 05:59:27.138990 containerd[1620]: time="2025-09-12T05:59:27.138945252Z" level=info msg="Container 18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:27.193313 containerd[1620]: time="2025-09-12T05:59:27.193286459Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\"" Sep 12 05:59:27.199636 containerd[1620]: time="2025-09-12T05:59:27.193803874Z" level=info msg="StartContainer for \"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\"" Sep 12 05:59:27.199636 containerd[1620]: time="2025-09-12T05:59:27.194623428Z" level=info msg="connecting to shim 18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c" address="unix:///run/containerd/s/e32586b24658170bf8d1742b6fe910d8836ec7e5527f4d27cd0f5650213c7028" protocol=ttrpc version=3 Sep 12 05:59:27.217718 systemd[1]: Started cri-containerd-18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c.scope - libcontainer container 18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c. Sep 12 05:59:27.262876 containerd[1620]: time="2025-09-12T05:59:27.262850208Z" level=info msg="StartContainer for \"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\" returns successfully" Sep 12 05:59:28.012807 kubelet[2932]: E0912 05:59:28.012709 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:28.954178 systemd[1]: cri-containerd-18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c.scope: Deactivated successfully. Sep 12 05:59:28.954417 systemd[1]: cri-containerd-18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c.scope: Consumed 303ms CPU time, 172.3M memory peak, 20K read from disk, 171.3M written to disk. 
Sep 12 05:59:28.966016 containerd[1620]: time="2025-09-12T05:59:28.961361879Z" level=info msg="received exit event container_id:\"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\" id:\"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\" pid:3643 exited_at:{seconds:1757656768 nanos:957139240}" Sep 12 05:59:28.966016 containerd[1620]: time="2025-09-12T05:59:28.961676912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\" id:\"18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c\" pid:3643 exited_at:{seconds:1757656768 nanos:957139240}" Sep 12 05:59:28.998778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18fab32d9469c131ca31a9e92fe6d0166a3f00834ec13c32b54fea59a48df12c-rootfs.mount: Deactivated successfully. Sep 12 05:59:29.003775 kubelet[2932]: I0912 05:59:29.003751 2932 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 05:59:29.036142 systemd[1]: Created slice kubepods-burstable-pod6ec3ddfc_1130_408d_ac97_54d9db3a5eea.slice - libcontainer container kubepods-burstable-pod6ec3ddfc_1130_408d_ac97_54d9db3a5eea.slice. Sep 12 05:59:29.045253 systemd[1]: Created slice kubepods-burstable-podf931fde1_f624_43e2_baac_64bdf6ab8d43.slice - libcontainer container kubepods-burstable-podf931fde1_f624_43e2_baac_64bdf6ab8d43.slice. Sep 12 05:59:29.052133 systemd[1]: Created slice kubepods-besteffort-pod774c6aa7_4dc3_4ca4_8443_40327e47969b.slice - libcontainer container kubepods-besteffort-pod774c6aa7_4dc3_4ca4_8443_40327e47969b.slice. Sep 12 05:59:29.063261 systemd[1]: Created slice kubepods-besteffort-pod7f7cffc0_28f8_4c8f_bf71_ba46730893da.slice - libcontainer container kubepods-besteffort-pod7f7cffc0_28f8_4c8f_bf71_ba46730893da.slice. 
Sep 12 05:59:29.069343 systemd[1]: Created slice kubepods-besteffort-podef258437_2a1e_4777_808a_2650cb571a2d.slice - libcontainer container kubepods-besteffort-podef258437_2a1e_4777_808a_2650cb571a2d.slice. Sep 12 05:59:29.077087 systemd[1]: Created slice kubepods-besteffort-pod7d032fb4_42c5_43bf_9040_c36bcf9cf53d.slice - libcontainer container kubepods-besteffort-pod7d032fb4_42c5_43bf_9040_c36bcf9cf53d.slice. Sep 12 05:59:29.082217 systemd[1]: Created slice kubepods-besteffort-pod85098826_2ef9_4be9_a714_c70c61ec7d55.slice - libcontainer container kubepods-besteffort-pod85098826_2ef9_4be9_a714_c70c61ec7d55.slice. Sep 12 05:59:29.110584 kubelet[2932]: I0912 05:59:29.110475 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrxv\" (UniqueName: \"kubernetes.io/projected/774c6aa7-4dc3-4ca4-8443-40327e47969b-kube-api-access-jvrxv\") pod \"whisker-577db48588-vmfv8\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " pod="calico-system/whisker-577db48588-vmfv8" Sep 12 05:59:29.110584 kubelet[2932]: I0912 05:59:29.110502 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmw7n\" (UniqueName: \"kubernetes.io/projected/ef258437-2a1e-4777-808a-2650cb571a2d-kube-api-access-vmw7n\") pod \"calico-apiserver-8d7f475b6-6ctkw\" (UID: \"ef258437-2a1e-4777-808a-2650cb571a2d\") " pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" Sep 12 05:59:29.110584 kubelet[2932]: I0912 05:59:29.110514 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjwf\" (UniqueName: \"kubernetes.io/projected/7d032fb4-42c5-43bf-9040-c36bcf9cf53d-kube-api-access-xsjwf\") pod \"calico-apiserver-8d7f475b6-hf4g9\" (UID: \"7d032fb4-42c5-43bf-9040-c36bcf9cf53d\") " pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" Sep 12 05:59:29.110584 kubelet[2932]: I0912 05:59:29.110526 2932 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef258437-2a1e-4777-808a-2650cb571a2d-calico-apiserver-certs\") pod \"calico-apiserver-8d7f475b6-6ctkw\" (UID: \"ef258437-2a1e-4777-808a-2650cb571a2d\") " pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" Sep 12 05:59:29.110584 kubelet[2932]: I0912 05:59:29.110535 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lf89\" (UniqueName: \"kubernetes.io/projected/f931fde1-f624-43e2-baac-64bdf6ab8d43-kube-api-access-5lf89\") pod \"coredns-668d6bf9bc-pzmsg\" (UID: \"f931fde1-f624-43e2-baac-64bdf6ab8d43\") " pod="kube-system/coredns-668d6bf9bc-pzmsg" Sep 12 05:59:29.111687 kubelet[2932]: I0912 05:59:29.110546 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/85098826-2ef9-4be9-a714-c70c61ec7d55-goldmane-key-pair\") pod \"goldmane-54d579b49d-7t4n7\" (UID: \"85098826-2ef9-4be9-a714-c70c61ec7d55\") " pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.111687 kubelet[2932]: I0912 05:59:29.110566 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d032fb4-42c5-43bf-9040-c36bcf9cf53d-calico-apiserver-certs\") pod \"calico-apiserver-8d7f475b6-hf4g9\" (UID: \"7d032fb4-42c5-43bf-9040-c36bcf9cf53d\") " pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" Sep 12 05:59:29.111687 kubelet[2932]: I0912 05:59:29.110871 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5v4w\" (UniqueName: \"kubernetes.io/projected/7f7cffc0-28f8-4c8f-bf71-ba46730893da-kube-api-access-z5v4w\") pod \"calico-kube-controllers-868998bbf7-9bvbh\" (UID: \"7f7cffc0-28f8-4c8f-bf71-ba46730893da\") " 
pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" Sep 12 05:59:29.111687 kubelet[2932]: I0912 05:59:29.110894 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f931fde1-f624-43e2-baac-64bdf6ab8d43-config-volume\") pod \"coredns-668d6bf9bc-pzmsg\" (UID: \"f931fde1-f624-43e2-baac-64bdf6ab8d43\") " pod="kube-system/coredns-668d6bf9bc-pzmsg" Sep 12 05:59:29.111687 kubelet[2932]: I0912 05:59:29.110904 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85098826-2ef9-4be9-a714-c70c61ec7d55-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-7t4n7\" (UID: \"85098826-2ef9-4be9-a714-c70c61ec7d55\") " pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.111775 kubelet[2932]: I0912 05:59:29.110933 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ec3ddfc-1130-408d-ac97-54d9db3a5eea-config-volume\") pod \"coredns-668d6bf9bc-bfh87\" (UID: \"6ec3ddfc-1130-408d-ac97-54d9db3a5eea\") " pod="kube-system/coredns-668d6bf9bc-bfh87" Sep 12 05:59:29.111775 kubelet[2932]: I0912 05:59:29.110946 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f7cffc0-28f8-4c8f-bf71-ba46730893da-tigera-ca-bundle\") pod \"calico-kube-controllers-868998bbf7-9bvbh\" (UID: \"7f7cffc0-28f8-4c8f-bf71-ba46730893da\") " pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" Sep 12 05:59:29.111775 kubelet[2932]: I0912 05:59:29.110961 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-backend-key-pair\") pod 
\"whisker-577db48588-vmfv8\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " pod="calico-system/whisker-577db48588-vmfv8" Sep 12 05:59:29.111775 kubelet[2932]: I0912 05:59:29.111159 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfv6f\" (UniqueName: \"kubernetes.io/projected/6ec3ddfc-1130-408d-ac97-54d9db3a5eea-kube-api-access-sfv6f\") pod \"coredns-668d6bf9bc-bfh87\" (UID: \"6ec3ddfc-1130-408d-ac97-54d9db3a5eea\") " pod="kube-system/coredns-668d6bf9bc-bfh87" Sep 12 05:59:29.111775 kubelet[2932]: I0912 05:59:29.111173 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7sjv\" (UniqueName: \"kubernetes.io/projected/85098826-2ef9-4be9-a714-c70c61ec7d55-kube-api-access-r7sjv\") pod \"goldmane-54d579b49d-7t4n7\" (UID: \"85098826-2ef9-4be9-a714-c70c61ec7d55\") " pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.112126 kubelet[2932]: I0912 05:59:29.111182 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-ca-bundle\") pod \"whisker-577db48588-vmfv8\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " pod="calico-system/whisker-577db48588-vmfv8" Sep 12 05:59:29.112126 kubelet[2932]: I0912 05:59:29.111192 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85098826-2ef9-4be9-a714-c70c61ec7d55-config\") pod \"goldmane-54d579b49d-7t4n7\" (UID: \"85098826-2ef9-4be9-a714-c70c61ec7d55\") " pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.160689 containerd[1620]: time="2025-09-12T05:59:29.160639922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 05:59:29.339750 containerd[1620]: time="2025-09-12T05:59:29.339367438Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfh87,Uid:6ec3ddfc-1130-408d-ac97-54d9db3a5eea,Namespace:kube-system,Attempt:0,}" Sep 12 05:59:29.351504 containerd[1620]: time="2025-09-12T05:59:29.351471409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzmsg,Uid:f931fde1-f624-43e2-baac-64bdf6ab8d43,Namespace:kube-system,Attempt:0,}" Sep 12 05:59:29.365830 containerd[1620]: time="2025-09-12T05:59:29.365651847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577db48588-vmfv8,Uid:774c6aa7-4dc3-4ca4-8443-40327e47969b,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:29.367727 containerd[1620]: time="2025-09-12T05:59:29.367711618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868998bbf7-9bvbh,Uid:7f7cffc0-28f8-4c8f-bf71-ba46730893da,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:29.395109 containerd[1620]: time="2025-09-12T05:59:29.395085424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7t4n7,Uid:85098826-2ef9-4be9-a714-c70c61ec7d55,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:29.395725 containerd[1620]: time="2025-09-12T05:59:29.395645179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-6ctkw,Uid:ef258437-2a1e-4777-808a-2650cb571a2d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:59:29.395725 containerd[1620]: time="2025-09-12T05:59:29.395718642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-hf4g9,Uid:7d032fb4-42c5-43bf-9040-c36bcf9cf53d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:59:29.683263 containerd[1620]: time="2025-09-12T05:59:29.683155250Z" level=error msg="Failed to destroy network for sandbox \"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 12 05:59:29.684334 containerd[1620]: time="2025-09-12T05:59:29.684308628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868998bbf7-9bvbh,Uid:7f7cffc0-28f8-4c8f-bf71-ba46730893da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.687180 kubelet[2932]: E0912 05:59:29.687139 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.687257 kubelet[2932]: E0912 05:59:29.687203 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" Sep 12 05:59:29.687257 kubelet[2932]: E0912 05:59:29.687218 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" Sep 12 05:59:29.687301 kubelet[2932]: E0912 05:59:29.687255 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-868998bbf7-9bvbh_calico-system(7f7cffc0-28f8-4c8f-bf71-ba46730893da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-868998bbf7-9bvbh_calico-system(7f7cffc0-28f8-4c8f-bf71-ba46730893da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a45463d9835b5ae1e9da1e3ab85971cf9f3ce24a82b91ef444afc87502fdd25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" podUID="7f7cffc0-28f8-4c8f-bf71-ba46730893da" Sep 12 05:59:29.696746 containerd[1620]: time="2025-09-12T05:59:29.696718443Z" level=error msg="Failed to destroy network for sandbox \"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.697307 containerd[1620]: time="2025-09-12T05:59:29.697272082Z" level=error msg="Failed to destroy network for sandbox \"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.697429 containerd[1620]: time="2025-09-12T05:59:29.697379236Z" level=error msg="Failed to destroy network for sandbox \"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.698193 containerd[1620]: time="2025-09-12T05:59:29.698170711Z" level=error msg="Failed to destroy network for sandbox \"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.698378 containerd[1620]: time="2025-09-12T05:59:29.698361998Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzmsg,Uid:f931fde1-f624-43e2-baac-64bdf6ab8d43,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.698786 containerd[1620]: time="2025-09-12T05:59:29.698767133Z" level=error msg="Failed to destroy network for sandbox \"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.699779 kubelet[2932]: E0912 05:59:29.699746 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.699830 kubelet[2932]: E0912 05:59:29.699790 2932 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pzmsg" Sep 12 05:59:29.699830 kubelet[2932]: E0912 05:59:29.699804 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pzmsg" Sep 12 05:59:29.699872 kubelet[2932]: E0912 05:59:29.699837 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pzmsg_kube-system(f931fde1-f624-43e2-baac-64bdf6ab8d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pzmsg_kube-system(f931fde1-f624-43e2-baac-64bdf6ab8d43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"476db242f1a165a85557c686f0970ba809a8f6e866573a0eb92362e7ad06d24f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pzmsg" podUID="f931fde1-f624-43e2-baac-64bdf6ab8d43" Sep 12 05:59:29.700987 containerd[1620]: time="2025-09-12T05:59:29.699967577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfh87,Uid:6ec3ddfc-1130-408d-ac97-54d9db3a5eea,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.701202 containerd[1620]: time="2025-09-12T05:59:29.701179399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-577db48588-vmfv8,Uid:774c6aa7-4dc3-4ca4-8443-40327e47969b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.701631 containerd[1620]: time="2025-09-12T05:59:29.701497319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-6ctkw,Uid:ef258437-2a1e-4777-808a-2650cb571a2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.702612 containerd[1620]: time="2025-09-12T05:59:29.702592663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7t4n7,Uid:85098826-2ef9-4be9-a714-c70c61ec7d55,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.702784 kubelet[2932]: E0912 05:59:29.702768 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.702885 containerd[1620]: time="2025-09-12T05:59:29.702873848Z" level=error msg="Failed to destroy network for sandbox \"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.703176 kubelet[2932]: E0912 05:59:29.702953 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.703176 kubelet[2932]: E0912 05:59:29.702766 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.703176 kubelet[2932]: E0912 05:59:29.702978 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7t4n7" Sep 12 05:59:29.703176 kubelet[2932]: E0912 05:59:29.702999 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bfh87" Sep 12 05:59:29.703275 kubelet[2932]: E0912 05:59:29.703003 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7t4n7_calico-system(85098826-2ef9-4be9-a714-c70c61ec7d55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7t4n7_calico-system(85098826-2ef9-4be9-a714-c70c61ec7d55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bc9ad1210fa7a3765934b473005c4d84b0d3ac49d280215dad8e6965f871225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7t4n7" podUID="85098826-2ef9-4be9-a714-c70c61ec7d55" Sep 12 05:59:29.703275 kubelet[2932]: E0912 05:59:29.702791 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.703275 kubelet[2932]: E0912 05:59:29.703122 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577db48588-vmfv8" Sep 12 05:59:29.703352 kubelet[2932]: E0912 05:59:29.703131 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-577db48588-vmfv8" Sep 12 05:59:29.703352 kubelet[2932]: E0912 05:59:29.703158 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-577db48588-vmfv8_calico-system(774c6aa7-4dc3-4ca4-8443-40327e47969b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-577db48588-vmfv8_calico-system(774c6aa7-4dc3-4ca4-8443-40327e47969b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5368f0aea8d3528c05ea21881a38702317dbb328e2d8e4428d7a4ff5a1df0b03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-577db48588-vmfv8" podUID="774c6aa7-4dc3-4ca4-8443-40327e47969b" Sep 12 05:59:29.703934 kubelet[2932]: E0912 05:59:29.702809 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.703934 kubelet[2932]: E0912 05:59:29.703752 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" Sep 12 05:59:29.703934 kubelet[2932]: E0912 05:59:29.703769 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" Sep 12 05:59:29.704096 kubelet[2932]: E0912 05:59:29.703800 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d7f475b6-6ctkw_calico-apiserver(ef258437-2a1e-4777-808a-2650cb571a2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d7f475b6-6ctkw_calico-apiserver(ef258437-2a1e-4777-808a-2650cb571a2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a1c2e01fbe5ec0d9827d63e037c499f5c8e8a56a7a9d2e543efb76933c24cc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" podUID="ef258437-2a1e-4777-808a-2650cb571a2d" Sep 12 05:59:29.704096 kubelet[2932]: E0912 05:59:29.703012 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bfh87" Sep 12 05:59:29.704096 kubelet[2932]: E0912 05:59:29.703968 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bfh87_kube-system(6ec3ddfc-1130-408d-ac97-54d9db3a5eea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bfh87_kube-system(6ec3ddfc-1130-408d-ac97-54d9db3a5eea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76abee77f968a386120c5626a7243ccd6855b1d186967c4dbb6aef1be827e0d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bfh87" podUID="6ec3ddfc-1130-408d-ac97-54d9db3a5eea" Sep 12 05:59:29.704185 containerd[1620]: time="2025-09-12T05:59:29.703985155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-hf4g9,Uid:7d032fb4-42c5-43bf-9040-c36bcf9cf53d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 12 05:59:29.705343 kubelet[2932]: E0912 05:59:29.705314 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:29.705402 kubelet[2932]: E0912 05:59:29.705352 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" Sep 12 05:59:29.705402 kubelet[2932]: E0912 05:59:29.705367 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" Sep 12 05:59:29.705975 kubelet[2932]: E0912 05:59:29.705395 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8d7f475b6-hf4g9_calico-apiserver(7d032fb4-42c5-43bf-9040-c36bcf9cf53d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8d7f475b6-hf4g9_calico-apiserver(7d032fb4-42c5-43bf-9040-c36bcf9cf53d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3cbeae8f09c484c6157a50e3d715c794d1a686a57d554e38c58bc72e253a37c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" podUID="7d032fb4-42c5-43bf-9040-c36bcf9cf53d" Sep 12 05:59:30.016372 systemd[1]: Created slice kubepods-besteffort-podfbcdb877_a070_4ed8_b9fc_c8a9acc42275.slice - libcontainer container kubepods-besteffort-podfbcdb877_a070_4ed8_b9fc_c8a9acc42275.slice. Sep 12 05:59:30.019213 containerd[1620]: time="2025-09-12T05:59:30.019189791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dftxb,Uid:fbcdb877-a070-4ed8-b9fc-c8a9acc42275,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:30.057088 containerd[1620]: time="2025-09-12T05:59:30.056996459Z" level=error msg="Failed to destroy network for sandbox \"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:30.058292 systemd[1]: run-netns-cni\x2d42590980\x2d0603\x2d6714\x2da345\x2d10b64eb95ea5.mount: Deactivated successfully. 
Sep 12 05:59:30.062157 containerd[1620]: time="2025-09-12T05:59:30.062137813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dftxb,Uid:fbcdb877-a070-4ed8-b9fc-c8a9acc42275,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:30.062366 kubelet[2932]: E0912 05:59:30.062344 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:59:30.062489 kubelet[2932]: E0912 05:59:30.062470 2932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dftxb" Sep 12 05:59:30.062572 kubelet[2932]: E0912 05:59:30.062542 2932 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dftxb" 
Sep 12 05:59:30.062668 kubelet[2932]: E0912 05:59:30.062648 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dftxb_calico-system(fbcdb877-a070-4ed8-b9fc-c8a9acc42275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dftxb_calico-system(fbcdb877-a070-4ed8-b9fc-c8a9acc42275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dee3b13c5700ac8602dac5db216955e36b23164332eb9cb044d44bfa3c7ea19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dftxb" podUID="fbcdb877-a070-4ed8-b9fc-c8a9acc42275" Sep 12 05:59:33.980744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659276036.mount: Deactivated successfully. Sep 12 05:59:34.308220 containerd[1620]: time="2025-09-12T05:59:34.308088891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:34.359123 containerd[1620]: time="2025-09-12T05:59:34.359094830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 05:59:34.376893 containerd[1620]: time="2025-09-12T05:59:34.376855982Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:34.411979 containerd[1620]: time="2025-09-12T05:59:34.411839685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:34.422139 containerd[1620]: time="2025-09-12T05:59:34.422086323Z" level=info msg="Pulled image 
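Every sandbox failure in the burst above shares one root cause, `stat /var/lib/calico/nodename: no such file or directory`, emitted once per CNI add or delete attempt. A minimal sketch for tallying which pods are stuck behind it in a saved journal excerpt (the filename `journal.txt` is an assumption; the `pod="ns/name"` field format is taken from the kubelet lines above):

```shell
# Tally sandbox-creation failures per pod from a saved journal excerpt.
# Each kubelet error line carries a pod="namespace/name" field.
grep -o 'pod="[^"]*"' journal.txt | sort | uniq -c | sort -rn
```

The highest counts identify the pods retrying most often; all of them should clear once calico/node is running and `/var/lib/calico` is mounted.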
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.252047233s" Sep 12 05:59:34.422139 containerd[1620]: time="2025-09-12T05:59:34.422111928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 05:59:34.484405 containerd[1620]: time="2025-09-12T05:59:34.484356237Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 05:59:34.570579 containerd[1620]: time="2025-09-12T05:59:34.569368550Z" level=info msg="Container 833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:34.570733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308766787.mount: Deactivated successfully. 
Sep 12 05:59:34.663014 containerd[1620]: time="2025-09-12T05:59:34.662991208Z" level=info msg="CreateContainer within sandbox \"c0253f76351b36402bdfa1361c4b49e9eba567aeca6500238dbec87c844b22c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\"" Sep 12 05:59:34.663415 containerd[1620]: time="2025-09-12T05:59:34.663394656Z" level=info msg="StartContainer for \"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\"" Sep 12 05:59:34.667519 containerd[1620]: time="2025-09-12T05:59:34.667092317Z" level=info msg="connecting to shim 833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7" address="unix:///run/containerd/s/e32586b24658170bf8d1742b6fe910d8836ec7e5527f4d27cd0f5650213c7028" protocol=ttrpc version=3 Sep 12 05:59:34.807009 systemd[1]: Started cri-containerd-833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7.scope - libcontainer container 833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7. Sep 12 05:59:34.867342 containerd[1620]: time="2025-09-12T05:59:34.866360001Z" level=info msg="StartContainer for \"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" returns successfully" Sep 12 05:59:35.133568 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 05:59:35.140828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 05:59:35.398165 containerd[1620]: time="2025-09-12T05:59:35.397880369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"978e50ad0c5127f0141da9dc95f29420686b2c5ff1e4b70a7d8471c58e751934\" pid:3963 exit_status:1 exited_at:{seconds:1757656775 nanos:397644793}" Sep 12 05:59:36.146301 kubelet[2932]: I0912 05:59:36.146187 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4jcz6" podStartSLOduration=3.086539985 podStartE2EDuration="19.146163321s" podCreationTimestamp="2025-09-12 05:59:17 +0000 UTC" firstStartedPulling="2025-09-12 05:59:18.363017558 +0000 UTC m=+19.414399539" lastFinishedPulling="2025-09-12 05:59:34.422640891 +0000 UTC m=+35.474022875" observedRunningTime="2025-09-12 05:59:35.419987035 +0000 UTC m=+36.471369025" watchObservedRunningTime="2025-09-12 05:59:36.146163321 +0000 UTC m=+37.197545306" Sep 12 05:59:36.163728 kubelet[2932]: I0912 05:59:36.163152 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrxv\" (UniqueName: \"kubernetes.io/projected/774c6aa7-4dc3-4ca4-8443-40327e47969b-kube-api-access-jvrxv\") pod \"774c6aa7-4dc3-4ca4-8443-40327e47969b\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " Sep 12 05:59:36.165313 kubelet[2932]: I0912 05:59:36.163857 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-backend-key-pair\") pod \"774c6aa7-4dc3-4ca4-8443-40327e47969b\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " Sep 12 05:59:36.171809 systemd[1]: var-lib-kubelet-pods-774c6aa7\x2d4dc3\x2d4ca4\x2d8443\x2d40327e47969b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 05:59:36.172583 kubelet[2932]: I0912 05:59:36.172326 2932 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "774c6aa7-4dc3-4ca4-8443-40327e47969b" (UID: "774c6aa7-4dc3-4ca4-8443-40327e47969b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 05:59:36.172629 kubelet[2932]: I0912 05:59:36.172616 2932 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774c6aa7-4dc3-4ca4-8443-40327e47969b-kube-api-access-jvrxv" (OuterVolumeSpecName: "kube-api-access-jvrxv") pod "774c6aa7-4dc3-4ca4-8443-40327e47969b" (UID: "774c6aa7-4dc3-4ca4-8443-40327e47969b"). InnerVolumeSpecName "kube-api-access-jvrxv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 05:59:36.174121 systemd[1]: var-lib-kubelet-pods-774c6aa7\x2d4dc3\x2d4ca4\x2d8443\x2d40327e47969b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvrxv.mount: Deactivated successfully. 
Sep 12 05:59:36.248219 containerd[1620]: time="2025-09-12T05:59:36.248190203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"636e2bcf35d1fa8d899b4b5e48e00b72105f41e5d71178903336ba0c20b350f0\" pid:4007 exit_status:1 exited_at:{seconds:1757656776 nanos:247767663}" Sep 12 05:59:36.264211 kubelet[2932]: I0912 05:59:36.264176 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-ca-bundle\") pod \"774c6aa7-4dc3-4ca4-8443-40327e47969b\" (UID: \"774c6aa7-4dc3-4ca4-8443-40327e47969b\") " Sep 12 05:59:36.264666 kubelet[2932]: I0912 05:59:36.264316 2932 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvrxv\" (UniqueName: \"kubernetes.io/projected/774c6aa7-4dc3-4ca4-8443-40327e47969b-kube-api-access-jvrxv\") on node \"localhost\" DevicePath \"\"" Sep 12 05:59:36.264666 kubelet[2932]: I0912 05:59:36.264332 2932 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 05:59:36.264666 kubelet[2932]: I0912 05:59:36.264643 2932 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "774c6aa7-4dc3-4ca4-8443-40327e47969b" (UID: "774c6aa7-4dc3-4ca4-8443-40327e47969b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 05:59:36.365166 kubelet[2932]: I0912 05:59:36.365133 2932 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774c6aa7-4dc3-4ca4-8443-40327e47969b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 05:59:36.479762 systemd[1]: Removed slice kubepods-besteffort-pod774c6aa7_4dc3_4ca4_8443_40327e47969b.slice - libcontainer container kubepods-besteffort-pod774c6aa7_4dc3_4ca4_8443_40327e47969b.slice. Sep 12 05:59:36.782291 kubelet[2932]: I0912 05:59:36.782010 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 05:59:36.969269 kubelet[2932]: I0912 05:59:36.968689 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2171c083-b9fa-4076-b8d4-cde26293400b-whisker-ca-bundle\") pod \"whisker-6c76686fdc-w2w9w\" (UID: \"2171c083-b9fa-4076-b8d4-cde26293400b\") " pod="calico-system/whisker-6c76686fdc-w2w9w" Sep 12 05:59:36.969383 kubelet[2932]: I0912 05:59:36.969373 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2171c083-b9fa-4076-b8d4-cde26293400b-whisker-backend-key-pair\") pod \"whisker-6c76686fdc-w2w9w\" (UID: \"2171c083-b9fa-4076-b8d4-cde26293400b\") " pod="calico-system/whisker-6c76686fdc-w2w9w" Sep 12 05:59:36.969485 kubelet[2932]: I0912 05:59:36.969459 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qns\" (UniqueName: \"kubernetes.io/projected/2171c083-b9fa-4076-b8d4-cde26293400b-kube-api-access-55qns\") pod \"whisker-6c76686fdc-w2w9w\" (UID: \"2171c083-b9fa-4076-b8d4-cde26293400b\") " pod="calico-system/whisker-6c76686fdc-w2w9w" Sep 12 05:59:36.972854 systemd[1]: Created slice 
kubepods-besteffort-pod2171c083_b9fa_4076_b8d4_cde26293400b.slice - libcontainer container kubepods-besteffort-pod2171c083_b9fa_4076_b8d4_cde26293400b.slice.
Sep 12 05:59:37.015454 kubelet[2932]: I0912 05:59:37.015427 2932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774c6aa7-4dc3-4ca4-8443-40327e47969b" path="/var/lib/kubelet/pods/774c6aa7-4dc3-4ca4-8443-40327e47969b/volumes"
Sep 12 05:59:37.336937 containerd[1620]: time="2025-09-12T05:59:37.336903599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"e42facafa289f0545a1d5955541a23967fab431b8bee00cfa91de2986e96853f\" pid:4119 exit_status:1 exited_at:{seconds:1757656777 nanos:336625480}"
Sep 12 05:59:37.342308 containerd[1620]: time="2025-09-12T05:59:37.342285005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c76686fdc-w2w9w,Uid:2171c083-b9fa-4076-b8d4-cde26293400b,Namespace:calico-system,Attempt:0,}"
Sep 12 05:59:38.104751 systemd-networkd[1507]: vxlan.calico: Link UP
Sep 12 05:59:38.104756 systemd-networkd[1507]: vxlan.calico: Gained carrier
Sep 12 05:59:39.202628 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL
Sep 12 05:59:40.042780 containerd[1620]: time="2025-09-12T05:59:40.042748104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-6ctkw,Uid:ef258437-2a1e-4777-808a-2650cb571a2d,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:59:40.182655 systemd-networkd[1507]: cali9ebff428472: Link UP
Sep 12 05:59:40.182966 systemd-networkd[1507]: cali9ebff428472: Gained carrier
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:38.186 [INFO][4169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c76686fdc--w2w9w-eth0 whisker-6c76686fdc- calico-system 2171c083-b9fa-4076-b8d4-cde26293400b 860 0 2025-09-12 05:59:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c76686fdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c76686fdc-w2w9w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9ebff428472 [] [] }} ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:38.186 [INFO][4169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.703 [INFO][4220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" HandleID="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Workload="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.759 [INFO][4220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" HandleID="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Workload="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006008b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c76686fdc-w2w9w", "timestamp":"2025-09-12 05:59:39.703779053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.759 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.763 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.773 [INFO][4220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:39.902 [INFO][4220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.105 [INFO][4220] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.114 [INFO][4220] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.115 [INFO][4220] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.117 [INFO][4220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.117 [INFO][4220] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.117 [INFO][4220] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.121 [INFO][4220] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4220] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" host="localhost"
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 05:59:40.214913 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" HandleID="k8s-pod-network.5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Workload="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.141 [INFO][4169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c76686fdc--w2w9w-eth0", GenerateName:"whisker-6c76686fdc-", Namespace:"calico-system", SelfLink:"", UID:"2171c083-b9fa-4076-b8d4-cde26293400b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c76686fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c76686fdc-w2w9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ebff428472", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.141 [INFO][4169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.141 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ebff428472 ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.188 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.189 [INFO][4169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c76686fdc--w2w9w-eth0", GenerateName:"whisker-6c76686fdc-", Namespace:"calico-system", SelfLink:"", UID:"2171c083-b9fa-4076-b8d4-cde26293400b", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c76686fdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161", Pod:"whisker-6c76686fdc-w2w9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ebff428472", MAC:"f6:f6:6a:a7:a8:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:40.217870 containerd[1620]: 2025-09-12 05:59:40.209 [INFO][4169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" Namespace="calico-system" Pod="whisker-6c76686fdc-w2w9w" WorkloadEndpoint="localhost-k8s-whisker--6c76686fdc--w2w9w-eth0"
Sep 12 05:59:40.266601 systemd-networkd[1507]: cali9e706d65309: Link UP
Sep 12 05:59:40.268309 systemd-networkd[1507]: cali9e706d65309: Gained carrier
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.096 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0 calico-apiserver-8d7f475b6- calico-apiserver ef258437-2a1e-4777-808a-2650cb571a2d 790 0 2025-09-12 05:59:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d7f475b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d7f475b6-6ctkw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e706d65309 [] [] }} ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.096 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.112 [INFO][4279] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" HandleID="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Workload="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.112 [INFO][4279] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" HandleID="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Workload="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d7f475b6-6ctkw", "timestamp":"2025-09-12 05:59:40.112005555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.112 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.138 [INFO][4279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.143 [INFO][4279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.184 [INFO][4279] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.222 [INFO][4279] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.225 [INFO][4279] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.229 [INFO][4279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.230 [INFO][4279] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.233 [INFO][4279] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.240 [INFO][4279] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.252 [INFO][4279] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.252 [INFO][4279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" host="localhost"
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.252 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 05:59:40.293609 containerd[1620]: 2025-09-12 05:59:40.252 [INFO][4279] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" HandleID="k8s-pod-network.844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Workload="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.262 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0", GenerateName:"calico-apiserver-8d7f475b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef258437-2a1e-4777-808a-2650cb571a2d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d7f475b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d7f475b6-6ctkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e706d65309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.262 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.262 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e706d65309 ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.270 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.272 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0", GenerateName:"calico-apiserver-8d7f475b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef258437-2a1e-4777-808a-2650cb571a2d", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d7f475b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401", Pod:"calico-apiserver-8d7f475b6-6ctkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e706d65309", MAC:"b6:a0:ff:1d:95:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:40.296344 containerd[1620]: 2025-09-12 05:59:40.289 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-6ctkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--6ctkw-eth0"
Sep 12 05:59:40.344473 containerd[1620]: time="2025-09-12T05:59:40.344439286Z" level=info msg="connecting to shim 5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161" address="unix:///run/containerd/s/72b665ceadd9a48bbacbef68260902786d364c26be81112a3a1520aee97ae4af" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:59:40.345144 containerd[1620]: time="2025-09-12T05:59:40.345129135Z" level=info msg="connecting to shim 844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401" address="unix:///run/containerd/s/6c7abb9940a40a3dd57a5edf180f55ee54fcbafad1a53fcf6ebc3dae992c3908" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:59:40.369767 systemd[1]: Started cri-containerd-5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161.scope - libcontainer container 5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161.
Sep 12 05:59:40.380905 systemd[1]: Started cri-containerd-844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401.scope - libcontainer container 844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401.
Sep 12 05:59:40.387608 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 05:59:40.393453 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 05:59:40.467678 containerd[1620]: time="2025-09-12T05:59:40.467614181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-6ctkw,Uid:ef258437-2a1e-4777-808a-2650cb571a2d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401\""
Sep 12 05:59:40.490517 containerd[1620]: time="2025-09-12T05:59:40.490496486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c76686fdc-w2w9w,Uid:2171c083-b9fa-4076-b8d4-cde26293400b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161\""
Sep 12 05:59:40.561637 containerd[1620]: time="2025-09-12T05:59:40.561102854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 05:59:41.013170 containerd[1620]: time="2025-09-12T05:59:41.013136643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7t4n7,Uid:85098826-2ef9-4be9-a714-c70c61ec7d55,Namespace:calico-system,Attempt:0,}"
Sep 12 05:59:41.121698 systemd-networkd[1507]: cali1f68884b55e: Link UP
Sep 12 05:59:41.121832 systemd-networkd[1507]: cali1f68884b55e: Gained carrier
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.066 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--7t4n7-eth0 goldmane-54d579b49d- calico-system 85098826-2ef9-4be9-a714-c70c61ec7d55 793 0 2025-09-12 05:59:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-7t4n7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f68884b55e [] [] }} ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.066 [INFO][4399] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.080 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" HandleID="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Workload="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.080 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" HandleID="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Workload="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-7t4n7", "timestamp":"2025-09-12 05:59:41.080664648 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.080 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.080 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.080 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.084 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.086 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.090 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.091 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.094 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.094 [INFO][4411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.095 [INFO][4411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.108 [INFO][4411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.114 [INFO][4411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.114 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" host="localhost"
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.114 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 05:59:41.152816 containerd[1620]: 2025-09-12 05:59:41.114 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" HandleID="k8s-pod-network.7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Workload="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.116 [INFO][4399] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7t4n7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"85098826-2ef9-4be9-a714-c70c61ec7d55", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-7t4n7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f68884b55e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.116 [INFO][4399] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.116 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f68884b55e ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.122 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.123 [INFO][4399] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7t4n7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"85098826-2ef9-4be9-a714-c70c61ec7d55", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf", Pod:"goldmane-54d579b49d-7t4n7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f68884b55e", MAC:"76:b3:9a:4c:7e:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 05:59:41.164959 containerd[1620]: 2025-09-12 05:59:41.150 [INFO][4399] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" Namespace="calico-system" Pod="goldmane-54d579b49d-7t4n7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7t4n7-eth0"
Sep 12 05:59:41.353859 containerd[1620]: time="2025-09-12T05:59:41.353494297Z" level=info msg="connecting to shim 7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf" address="unix:///run/containerd/s/aca768cc179a8a2b09ad00bb136dfda51a1d51b61fc22323606b8b9d24f6169f" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:59:41.401728 systemd[1]: Started cri-containerd-7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf.scope - libcontainer container 7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf.
Sep 12 05:59:41.411949 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 05:59:41.446683 containerd[1620]: time="2025-09-12T05:59:41.446656616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7t4n7,Uid:85098826-2ef9-4be9-a714-c70c61ec7d55,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf\""
Sep 12 05:59:41.570660 systemd-networkd[1507]: cali9ebff428472: Gained IPv6LL
Sep 12 05:59:41.826653 systemd-networkd[1507]: cali9e706d65309: Gained IPv6LL
Sep 12 05:59:42.402684 systemd-networkd[1507]: cali1f68884b55e: Gained IPv6LL
Sep 12 05:59:42.538333 containerd[1620]: time="2025-09-12T05:59:42.538301144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:59:42.545560 containerd[1620]: time="2025-09-12T05:59:42.545524857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 05:59:42.555605 containerd[1620]: time="2025-09-12T05:59:42.555563082Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:59:42.562522 containerd[1620]: time="2025-09-12T05:59:42.562479427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:59:42.562871 containerd[1620]: time="2025-09-12T05:59:42.562754652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.00162718s"
Sep 12 05:59:42.562871 containerd[1620]: time="2025-09-12T05:59:42.562771391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 05:59:42.563560 containerd[1620]: time="2025-09-12T05:59:42.563535441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 05:59:42.611963 containerd[1620]: time="2025-09-12T05:59:42.611937480Z" level=info msg="CreateContainer within sandbox \"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 05:59:42.652718 containerd[1620]: time="2025-09-12T05:59:42.652037973Z" level=info msg="Container 223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:59:42.703383 containerd[1620]: time="2025-09-12T05:59:42.703311519Z" level=info msg="CreateContainer within sandbox \"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101\""
Sep 12 05:59:42.704163 containerd[1620]:
time="2025-09-12T05:59:42.703913226Z" level=info msg="StartContainer for \"223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101\"" Sep 12 05:59:42.705437 containerd[1620]: time="2025-09-12T05:59:42.705417310Z" level=info msg="connecting to shim 223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101" address="unix:///run/containerd/s/72b665ceadd9a48bbacbef68260902786d364c26be81112a3a1520aee97ae4af" protocol=ttrpc version=3 Sep 12 05:59:42.720705 systemd[1]: Started cri-containerd-223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101.scope - libcontainer container 223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101. Sep 12 05:59:42.779433 containerd[1620]: time="2025-09-12T05:59:42.779408679Z" level=info msg="StartContainer for \"223d65186d6836edd362588606082944f866550fd3e3d478599e77aed39e4101\" returns successfully" Sep 12 05:59:43.013021 containerd[1620]: time="2025-09-12T05:59:43.012685505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868998bbf7-9bvbh,Uid:7f7cffc0-28f8-4c8f-bf71-ba46730893da,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:43.013021 containerd[1620]: time="2025-09-12T05:59:43.012932133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-hf4g9,Uid:7d032fb4-42c5-43bf-9040-c36bcf9cf53d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:59:43.014016 containerd[1620]: time="2025-09-12T05:59:43.013967491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfh87,Uid:6ec3ddfc-1130-408d-ac97-54d9db3a5eea,Namespace:kube-system,Attempt:0,}" Sep 12 05:59:43.215498 systemd-networkd[1507]: cali05303b8f089: Link UP Sep 12 05:59:43.216611 systemd-networkd[1507]: cali05303b8f089: Gained carrier Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.097 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0 calico-kube-controllers-868998bbf7- calico-system 7f7cffc0-28f8-4c8f-bf71-ba46730893da 792 0 2025-09-12 05:59:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:868998bbf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-868998bbf7-9bvbh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali05303b8f089 [] [] }} ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.097 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.115 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" HandleID="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Workload="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.115 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" HandleID="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Workload="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-868998bbf7-9bvbh", "timestamp":"2025-09-12 05:59:43.115536166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.115 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.115 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.115 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.124 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.157 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.161 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.162 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.165 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.165 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.166 [INFO][4523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.177 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.206 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.207 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" host="localhost" Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.207 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:59:43.239391 containerd[1620]: 2025-09-12 05:59:43.207 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" HandleID="k8s-pod-network.b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Workload="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.247519 containerd[1620]: 2025-09-12 05:59:43.212 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0", GenerateName:"calico-kube-controllers-868998bbf7-", Namespace:"calico-system", SelfLink:"", UID:"7f7cffc0-28f8-4c8f-bf71-ba46730893da", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868998bbf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-868998bbf7-9bvbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05303b8f089", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.247519 containerd[1620]: 2025-09-12 05:59:43.212 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.247519 containerd[1620]: 2025-09-12 05:59:43.212 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05303b8f089 ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.247519 containerd[1620]: 2025-09-12 05:59:43.217 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.247519 containerd[1620]: 2025-09-12 05:59:43.217 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0", GenerateName:"calico-kube-controllers-868998bbf7-", Namespace:"calico-system", SelfLink:"", UID:"7f7cffc0-28f8-4c8f-bf71-ba46730893da", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"868998bbf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc", Pod:"calico-kube-controllers-868998bbf7-9bvbh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05303b8f089", MAC:"7a:63:68:93:bf:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.247676 containerd[1620]: 2025-09-12 05:59:43.237 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" Namespace="calico-system" Pod="calico-kube-controllers-868998bbf7-9bvbh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--868998bbf7--9bvbh-eth0" Sep 12 05:59:43.322360 systemd-networkd[1507]: cali54776ee12de: Link UP Sep 12 05:59:43.323435 systemd-networkd[1507]: cali54776ee12de: 
Gained carrier Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.138 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0 calico-apiserver-8d7f475b6- calico-apiserver 7d032fb4-42c5-43bf-9040-c36bcf9cf53d 788 0 2025-09-12 05:59:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8d7f475b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8d7f475b6-hf4g9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54776ee12de [] [] }} ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.139 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.193 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" HandleID="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Workload="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.193 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" 
HandleID="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Workload="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8d7f475b6-hf4g9", "timestamp":"2025-09-12 05:59:43.193341883 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.193 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.207 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.207 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.225 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.261 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.263 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.264 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.278 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 
2025-09-12 05:59:43.280 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.281 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041 Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.307 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" host="localhost" Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:59:43.355493 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" HandleID="k8s-pod-network.a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Workload="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.319 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0", GenerateName:"calico-apiserver-8d7f475b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d032fb4-42c5-43bf-9040-c36bcf9cf53d", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d7f475b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8d7f475b6-hf4g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54776ee12de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.319 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.320 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54776ee12de ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.322 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.322 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0", GenerateName:"calico-apiserver-8d7f475b6-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"7d032fb4-42c5-43bf-9040-c36bcf9cf53d", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8d7f475b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041", Pod:"calico-apiserver-8d7f475b6-hf4g9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54776ee12de", MAC:"52:44:b4:47:ed:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.407191 containerd[1620]: 2025-09-12 05:59:43.353 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" Namespace="calico-apiserver" Pod="calico-apiserver-8d7f475b6-hf4g9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8d7f475b6--hf4g9-eth0" Sep 12 05:59:43.436979 systemd-networkd[1507]: cali083fa637418: Link UP Sep 12 05:59:43.438355 systemd-networkd[1507]: cali083fa637418: Gained carrier Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.161 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bfh87-eth0 coredns-668d6bf9bc- kube-system 6ec3ddfc-1130-408d-ac97-54d9db3a5eea 779 0 2025-09-12 05:59:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bfh87 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali083fa637418 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.161 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.204 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" HandleID="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Workload="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.204 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" HandleID="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Workload="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bfh87", 
"timestamp":"2025-09-12 05:59:43.204789337 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.204 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.317 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.325 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.401 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.403 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.404 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.405 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.405 [INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.406 [INFO][4561] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364 Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.413 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.428 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.428 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" host="localhost" Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.429 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:59:43.461428 containerd[1620]: 2025-09-12 05:59:43.429 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" HandleID="k8s-pod-network.6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Workload="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.467783 containerd[1620]: 2025-09-12 05:59:43.432 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bfh87-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6ec3ddfc-1130-408d-ac97-54d9db3a5eea", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bfh87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali083fa637418", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.467783 containerd[1620]: 2025-09-12 05:59:43.432 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.467783 containerd[1620]: 2025-09-12 05:59:43.432 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali083fa637418 ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.467783 containerd[1620]: 2025-09-12 05:59:43.438 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.467783 containerd[1620]: 2025-09-12 05:59:43.440 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bfh87-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6ec3ddfc-1130-408d-ac97-54d9db3a5eea", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364", Pod:"coredns-668d6bf9bc-bfh87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali083fa637418", MAC:"56:f9:12:92:0e:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:43.468122 containerd[1620]: 2025-09-12 05:59:43.459 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" Namespace="kube-system" Pod="coredns-668d6bf9bc-bfh87" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bfh87-eth0" Sep 12 05:59:43.529222 containerd[1620]: time="2025-09-12T05:59:43.528958009Z" level=info msg="connecting to shim b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc" address="unix:///run/containerd/s/64e29d4abc655ad6b496f973e3414a96ea491ca70aded6359d51760301e63842" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:43.529408 containerd[1620]: time="2025-09-12T05:59:43.529382420Z" level=info msg="connecting to shim a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041" address="unix:///run/containerd/s/d6d59668fb16425d2c0f78f9228df84d233871c2e6d0109754ffd7c934b81f50" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:43.542726 containerd[1620]: time="2025-09-12T05:59:43.542689138Z" level=info msg="connecting to shim 6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364" address="unix:///run/containerd/s/d9fa1804484370bd1ee13cc170f8b2e9b3400cf0e872b874340ca88e9d403b16" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:43.554709 systemd[1]: Started cri-containerd-b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc.scope - libcontainer container b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc. Sep 12 05:59:43.558818 systemd[1]: Started cri-containerd-a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041.scope - libcontainer container a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041. Sep 12 05:59:43.570665 systemd[1]: Started cri-containerd-6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364.scope - libcontainer container 6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364. 
Sep 12 05:59:43.578958 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:59:43.579692 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:59:43.582057 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:59:43.619531 containerd[1620]: time="2025-09-12T05:59:43.619505368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8d7f475b6-hf4g9,Uid:7d032fb4-42c5-43bf-9040-c36bcf9cf53d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041\"" Sep 12 05:59:43.629652 containerd[1620]: time="2025-09-12T05:59:43.629404369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfh87,Uid:6ec3ddfc-1130-408d-ac97-54d9db3a5eea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364\"" Sep 12 05:59:43.632280 containerd[1620]: time="2025-09-12T05:59:43.632253710Z" level=info msg="CreateContainer within sandbox \"6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:59:43.657977 containerd[1620]: time="2025-09-12T05:59:43.657949850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-868998bbf7-9bvbh,Uid:7f7cffc0-28f8-4c8f-bf71-ba46730893da,Namespace:calico-system,Attempt:0,} returns sandbox id \"b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc\"" Sep 12 05:59:43.670240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778453561.mount: Deactivated successfully. 
Sep 12 05:59:43.672254 containerd[1620]: time="2025-09-12T05:59:43.671531352Z" level=info msg="Container d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:43.676205 containerd[1620]: time="2025-09-12T05:59:43.676181464Z" level=info msg="CreateContainer within sandbox \"6746a795ca02d4d699a919055bbf8bf269c9ffd1e61a70e88dfa9b6c3193c364\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e\"" Sep 12 05:59:43.676638 containerd[1620]: time="2025-09-12T05:59:43.676620955Z" level=info msg="StartContainer for \"d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e\"" Sep 12 05:59:43.677063 containerd[1620]: time="2025-09-12T05:59:43.677047817Z" level=info msg="connecting to shim d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e" address="unix:///run/containerd/s/d9fa1804484370bd1ee13cc170f8b2e9b3400cf0e872b874340ca88e9d403b16" protocol=ttrpc version=3 Sep 12 05:59:43.692688 systemd[1]: Started cri-containerd-d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e.scope - libcontainer container d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e. 
Sep 12 05:59:43.716232 containerd[1620]: time="2025-09-12T05:59:43.716199772Z" level=info msg="StartContainer for \"d80e05c570ba864057c97e6ce22cb43dde35b75d68024325a0fa3d8493046e6e\" returns successfully" Sep 12 05:59:44.013264 containerd[1620]: time="2025-09-12T05:59:44.013141615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dftxb,Uid:fbcdb877-a070-4ed8-b9fc-c8a9acc42275,Namespace:calico-system,Attempt:0,}" Sep 12 05:59:44.023281 containerd[1620]: time="2025-09-12T05:59:44.023224997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzmsg,Uid:f931fde1-f624-43e2-baac-64bdf6ab8d43,Namespace:kube-system,Attempt:0,}" Sep 12 05:59:44.119974 systemd-networkd[1507]: cali5217f5a60b4: Link UP Sep 12 05:59:44.122106 systemd-networkd[1507]: cali5217f5a60b4: Gained carrier Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.047 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dftxb-eth0 csi-node-driver- calico-system fbcdb877-a070-4ed8-b9fc-c8a9acc42275 674 0 2025-09-12 05:59:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dftxb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5217f5a60b4 [] [] }} ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.047 [INFO][4773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.082 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" HandleID="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Workload="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.082 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" HandleID="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Workload="localhost-k8s-csi--node--driver--dftxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dftxb", "timestamp":"2025-09-12 05:59:44.082689796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.082 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.082 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.082 [INFO][4798] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.087 [INFO][4798] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.091 [INFO][4798] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.095 [INFO][4798] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.097 [INFO][4798] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.098 [INFO][4798] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.098 [INFO][4798] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.099 [INFO][4798] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.103 [INFO][4798] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.109 [INFO][4798] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.110 [INFO][4798] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" host="localhost" Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.110 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:59:44.139571 containerd[1620]: 2025-09-12 05:59:44.110 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" HandleID="k8s-pod-network.f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Workload="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.112 [INFO][4773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dftxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbcdb877-a070-4ed8-b9fc-c8a9acc42275", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dftxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5217f5a60b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.112 [INFO][4773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.112 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5217f5a60b4 ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.122 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.123 [INFO][4773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" 
Namespace="calico-system" Pod="csi-node-driver-dftxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dftxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbcdb877-a070-4ed8-b9fc-c8a9acc42275", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e", Pod:"csi-node-driver-dftxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5217f5a60b4", MAC:"1a:9f:8b:08:12:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:44.139976 containerd[1620]: 2025-09-12 05:59:44.129 [INFO][4773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" Namespace="calico-system" Pod="csi-node-driver-dftxb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dftxb-eth0" Sep 12 05:59:44.169789 containerd[1620]: time="2025-09-12T05:59:44.169689040Z" level=info msg="connecting to shim f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e" address="unix:///run/containerd/s/126bbf7c07e72c9a2fbb33ac944aa5b22c90d12e62a2d6299671805932e6a6c1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:44.190687 systemd[1]: Started cri-containerd-f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e.scope - libcontainer container f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e. Sep 12 05:59:44.205937 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:59:44.233943 systemd-networkd[1507]: calicc20ddd0cc6: Link UP Sep 12 05:59:44.234411 systemd-networkd[1507]: calicc20ddd0cc6: Gained carrier Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.061 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0 coredns-668d6bf9bc- kube-system f931fde1-f624-43e2-baac-64bdf6ab8d43 783 0 2025-09-12 05:59:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-pzmsg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc20ddd0cc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.061 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.103 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" HandleID="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Workload="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.103 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" HandleID="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Workload="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d4f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-pzmsg", "timestamp":"2025-09-12 05:59:44.102346611 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.103 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.110 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.110 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.188 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.191 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.195 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.196 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.198 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.198 [INFO][4805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.198 [INFO][4805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03 Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.202 [INFO][4805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.228 [INFO][4805] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.228 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" host="localhost" Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.228 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:59:44.262311 containerd[1620]: 2025-09-12 05:59:44.228 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" HandleID="k8s-pod-network.87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Workload="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.272233 containerd[1620]: 2025-09-12 05:59:44.230 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f931fde1-f624-43e2-baac-64bdf6ab8d43", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-pzmsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc20ddd0cc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:44.272233 containerd[1620]: 2025-09-12 05:59:44.230 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.272233 containerd[1620]: 2025-09-12 05:59:44.230 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc20ddd0cc6 ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.272233 containerd[1620]: 2025-09-12 05:59:44.237 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.272233 containerd[1620]: 2025-09-12 05:59:44.240 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f931fde1-f624-43e2-baac-64bdf6ab8d43", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 59, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03", Pod:"coredns-668d6bf9bc-pzmsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc20ddd0cc6", MAC:"c2:7e:ed:76:9a:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:59:44.272743 containerd[1620]: 2025-09-12 05:59:44.258 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" Namespace="kube-system" Pod="coredns-668d6bf9bc-pzmsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pzmsg-eth0" Sep 12 05:59:44.272743 containerd[1620]: time="2025-09-12T05:59:44.264694754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dftxb,Uid:fbcdb877-a070-4ed8-b9fc-c8a9acc42275,Namespace:calico-system,Attempt:0,} returns sandbox id \"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e\"" Sep 12 05:59:44.474498 containerd[1620]: time="2025-09-12T05:59:44.473624551Z" level=info msg="connecting to shim 87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03" address="unix:///run/containerd/s/9928749550f0df3b5f52040834fd895627a90b56951eb1925bd0a671596f57d9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:59:44.491941 kubelet[2932]: I0912 05:59:44.490570 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bfh87" podStartSLOduration=38.454935279 podStartE2EDuration="38.454935279s" podCreationTimestamp="2025-09-12 05:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:44.45321492 +0000 UTC m=+45.504596910" watchObservedRunningTime="2025-09-12 05:59:44.454935279 +0000 UTC m=+45.506317265" Sep 12 05:59:44.500685 systemd[1]: Started cri-containerd-87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03.scope - libcontainer container 
87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03. Sep 12 05:59:44.519612 systemd-resolved[1508]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:59:44.561678 containerd[1620]: time="2025-09-12T05:59:44.561589474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pzmsg,Uid:f931fde1-f624-43e2-baac-64bdf6ab8d43,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03\"" Sep 12 05:59:44.565351 containerd[1620]: time="2025-09-12T05:59:44.565238620Z" level=info msg="CreateContainer within sandbox \"87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:59:44.573902 containerd[1620]: time="2025-09-12T05:59:44.573875253Z" level=info msg="Container 51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:44.578915 containerd[1620]: time="2025-09-12T05:59:44.578498039Z" level=info msg="CreateContainer within sandbox \"87d42ddf052ba83c8f6e61d4c6e5d2e403288046f2b921edd4b4d64abd073e03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d\"" Sep 12 05:59:44.578841 systemd-networkd[1507]: cali05303b8f089: Gained IPv6LL Sep 12 05:59:44.580572 containerd[1620]: time="2025-09-12T05:59:44.580144258Z" level=info msg="StartContainer for \"51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d\"" Sep 12 05:59:44.581289 containerd[1620]: time="2025-09-12T05:59:44.581276222Z" level=info msg="connecting to shim 51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d" address="unix:///run/containerd/s/9928749550f0df3b5f52040834fd895627a90b56951eb1925bd0a671596f57d9" protocol=ttrpc version=3 Sep 12 05:59:44.603321 systemd[1]: Started 
cri-containerd-51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d.scope - libcontainer container 51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d. Sep 12 05:59:44.644720 containerd[1620]: time="2025-09-12T05:59:44.644696848Z" level=info msg="StartContainer for \"51af2caf6df85e8dbb60594bf6369921e7c998c609b484b38fc79443ff0cb71d\" returns successfully" Sep 12 05:59:44.706978 systemd-networkd[1507]: cali54776ee12de: Gained IPv6LL Sep 12 05:59:45.285541 containerd[1620]: time="2025-09-12T05:59:45.285511175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:45.286281 containerd[1620]: time="2025-09-12T05:59:45.286262807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 05:59:45.287153 containerd[1620]: time="2025-09-12T05:59:45.287134475Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:45.288473 containerd[1620]: time="2025-09-12T05:59:45.288453445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:45.289153 containerd[1620]: time="2025-09-12T05:59:45.289133696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.725577144s" Sep 12 05:59:45.289244 containerd[1620]: time="2025-09-12T05:59:45.289230573Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:59:45.289945 containerd[1620]: time="2025-09-12T05:59:45.289928005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 05:59:45.293712 containerd[1620]: time="2025-09-12T05:59:45.293682886Z" level=info msg="CreateContainer within sandbox \"844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:59:45.310287 containerd[1620]: time="2025-09-12T05:59:45.309683511Z" level=info msg="Container 580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:45.314013 containerd[1620]: time="2025-09-12T05:59:45.313988870Z" level=info msg="CreateContainer within sandbox \"844d748fe916fac2bfa860fae932beec631426bb4cedeef194ef452eef3e3401\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be\"" Sep 12 05:59:45.315328 containerd[1620]: time="2025-09-12T05:59:45.315307950Z" level=info msg="StartContainer for \"580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be\"" Sep 12 05:59:45.317433 containerd[1620]: time="2025-09-12T05:59:45.317379704Z" level=info msg="connecting to shim 580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be" address="unix:///run/containerd/s/6c7abb9940a40a3dd57a5edf180f55ee54fcbafad1a53fcf6ebc3dae992c3908" protocol=ttrpc version=3 Sep 12 05:59:45.335756 systemd[1]: Started cri-containerd-580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be.scope - libcontainer container 580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be. 
Sep 12 05:59:45.347545 systemd-networkd[1507]: cali083fa637418: Gained IPv6LL Sep 12 05:59:45.377538 containerd[1620]: time="2025-09-12T05:59:45.377516230Z" level=info msg="StartContainer for \"580a402cc831dd4fb957db401e82333f817055674d045a0bf2a9073b43ada6be\" returns successfully" Sep 12 05:59:45.431567 kubelet[2932]: I0912 05:59:45.431508 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d7f475b6-6ctkw" podStartSLOduration=25.702500217 podStartE2EDuration="30.431490338s" podCreationTimestamp="2025-09-12 05:59:15 +0000 UTC" firstStartedPulling="2025-09-12 05:59:40.560861834 +0000 UTC m=+41.612243816" lastFinishedPulling="2025-09-12 05:59:45.289851956 +0000 UTC m=+46.341233937" observedRunningTime="2025-09-12 05:59:45.405153109 +0000 UTC m=+46.456535099" watchObservedRunningTime="2025-09-12 05:59:45.431490338 +0000 UTC m=+46.482872330" Sep 12 05:59:45.432026 kubelet[2932]: I0912 05:59:45.431907 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pzmsg" podStartSLOduration=39.431896787 podStartE2EDuration="39.431896787s" podCreationTimestamp="2025-09-12 05:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:59:45.431803748 +0000 UTC m=+46.483185734" watchObservedRunningTime="2025-09-12 05:59:45.431896787 +0000 UTC m=+46.483278778" Sep 12 05:59:45.730659 systemd-networkd[1507]: cali5217f5a60b4: Gained IPv6LL Sep 12 05:59:46.050659 systemd-networkd[1507]: calicc20ddd0cc6: Gained IPv6LL Sep 12 05:59:46.405082 kubelet[2932]: I0912 05:59:46.404872 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 05:59:47.865813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253111202.mount: Deactivated successfully. 
Sep 12 05:59:48.516903 containerd[1620]: time="2025-09-12T05:59:48.516877043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:48.544189 containerd[1620]: time="2025-09-12T05:59:48.544155653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 05:59:48.550362 containerd[1620]: time="2025-09-12T05:59:48.550338311Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:48.588565 containerd[1620]: time="2025-09-12T05:59:48.588261342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:48.591299 containerd[1620]: time="2025-09-12T05:59:48.591269681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.299193973s" Sep 12 05:59:48.591433 containerd[1620]: time="2025-09-12T05:59:48.591418911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 05:59:48.594948 containerd[1620]: time="2025-09-12T05:59:48.594907187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 05:59:48.619166 containerd[1620]: time="2025-09-12T05:59:48.619138704Z" level=info msg="CreateContainer within sandbox 
\"7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 05:59:48.660812 containerd[1620]: time="2025-09-12T05:59:48.660785207Z" level=info msg="Container a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:48.663511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835897683.mount: Deactivated successfully. Sep 12 05:59:48.691526 containerd[1620]: time="2025-09-12T05:59:48.691457720Z" level=info msg="CreateContainer within sandbox \"7f0664e387356346cd938b42900bec1f24aa7f207969ec7a8165ec1009370bbf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\"" Sep 12 05:59:48.697033 containerd[1620]: time="2025-09-12T05:59:48.697007828Z" level=info msg="StartContainer for \"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\"" Sep 12 05:59:48.709206 containerd[1620]: time="2025-09-12T05:59:48.709172308Z" level=info msg="connecting to shim a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883" address="unix:///run/containerd/s/aca768cc179a8a2b09ad00bb136dfda51a1d51b61fc22323606b8b9d24f6169f" protocol=ttrpc version=3 Sep 12 05:59:48.766668 systemd[1]: Started cri-containerd-a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883.scope - libcontainer container a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883. 
Sep 12 05:59:48.818233 containerd[1620]: time="2025-09-12T05:59:48.818141849Z" level=info msg="StartContainer for \"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" returns successfully" Sep 12 05:59:49.518010 kubelet[2932]: I0912 05:59:49.516830 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-7t4n7" podStartSLOduration=25.369571042 podStartE2EDuration="32.51681478s" podCreationTimestamp="2025-09-12 05:59:17 +0000 UTC" firstStartedPulling="2025-09-12 05:59:41.44744096 +0000 UTC m=+42.498822942" lastFinishedPulling="2025-09-12 05:59:48.594684696 +0000 UTC m=+49.646066680" observedRunningTime="2025-09-12 05:59:49.503295819 +0000 UTC m=+50.554677802" watchObservedRunningTime="2025-09-12 05:59:49.51681478 +0000 UTC m=+50.568196765" Sep 12 05:59:49.593295 containerd[1620]: time="2025-09-12T05:59:49.593263376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"9a70bda6e9e471fc07a13278c4b8808be80779fdb6600b9bad30b6f33e1d7d97\" pid:5078 exit_status:1 exited_at:{seconds:1757656789 nanos:578982128}" Sep 12 05:59:50.570221 containerd[1620]: time="2025-09-12T05:59:50.570194692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"f4c787a858a017f97d037bb9a18df945bce54dc9602793148fcbac22530b1cec\" pid:5105 exit_status:1 exited_at:{seconds:1757656790 nanos:569904509}" Sep 12 05:59:51.593268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796437486.mount: Deactivated successfully. 
Sep 12 05:59:51.675442 containerd[1620]: time="2025-09-12T05:59:51.675403080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"e8ef676d60ab434dc55833b19fdf50e799f716adb6767f4674cae674fa74b7ab\" pid:5131 exit_status:1 exited_at:{seconds:1757656791 nanos:675072057}" Sep 12 05:59:51.703669 containerd[1620]: time="2025-09-12T05:59:51.703212873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:51.703669 containerd[1620]: time="2025-09-12T05:59:51.703651768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 05:59:51.704864 containerd[1620]: time="2025-09-12T05:59:51.704839358Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:51.705829 containerd[1620]: time="2025-09-12T05:59:51.705775900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:51.706401 containerd[1620]: time="2025-09-12T05:59:51.706161139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.111211945s" Sep 12 05:59:51.706401 containerd[1620]: time="2025-09-12T05:59:51.706180877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 05:59:51.706747 containerd[1620]: time="2025-09-12T05:59:51.706736865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 05:59:51.708595 containerd[1620]: time="2025-09-12T05:59:51.708354629Z" level=info msg="CreateContainer within sandbox \"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 05:59:51.717025 containerd[1620]: time="2025-09-12T05:59:51.717002874Z" level=info msg="Container 8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:51.721251 containerd[1620]: time="2025-09-12T05:59:51.721225215Z" level=info msg="CreateContainer within sandbox \"5ff86726d8902b172dcc5d4360c4f7c18e63e06303968b3bd11820b3d5029161\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d\"" Sep 12 05:59:51.722729 containerd[1620]: time="2025-09-12T05:59:51.721695782Z" level=info msg="StartContainer for \"8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d\"" Sep 12 05:59:51.722729 containerd[1620]: time="2025-09-12T05:59:51.722407199Z" level=info msg="connecting to shim 8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d" address="unix:///run/containerd/s/72b665ceadd9a48bbacbef68260902786d364c26be81112a3a1520aee97ae4af" protocol=ttrpc version=3 Sep 12 05:59:51.742664 systemd[1]: Started cri-containerd-8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d.scope - libcontainer container 8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d. 
Sep 12 05:59:51.799935 containerd[1620]: time="2025-09-12T05:59:51.799910627Z" level=info msg="StartContainer for \"8959d7e7815180ad6f5d6faeb53d9d748b08f0c31dcb879b8528221a93eec49d\" returns successfully" Sep 12 05:59:52.203843 containerd[1620]: time="2025-09-12T05:59:52.203809282Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:52.207838 containerd[1620]: time="2025-09-12T05:59:52.207817084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 05:59:52.220455 containerd[1620]: time="2025-09-12T05:59:52.208683666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 501.880175ms" Sep 12 05:59:52.220455 containerd[1620]: time="2025-09-12T05:59:52.208699532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:59:52.220455 containerd[1620]: time="2025-09-12T05:59:52.209428301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 05:59:52.226148 containerd[1620]: time="2025-09-12T05:59:52.226120971Z" level=info msg="CreateContainer within sandbox \"a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:59:52.265074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520943509.mount: Deactivated successfully. 
Sep 12 05:59:52.265304 containerd[1620]: time="2025-09-12T05:59:52.265277687Z" level=info msg="Container 01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:52.298448 containerd[1620]: time="2025-09-12T05:59:52.298396074Z" level=info msg="CreateContainer within sandbox \"a79497b095456a95e30186f327396562ca7fccdbb6e50890b6cf4389653a3041\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8\"" Sep 12 05:59:52.302662 containerd[1620]: time="2025-09-12T05:59:52.298884159Z" level=info msg="StartContainer for \"01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8\"" Sep 12 05:59:52.302662 containerd[1620]: time="2025-09-12T05:59:52.299480794Z" level=info msg="connecting to shim 01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8" address="unix:///run/containerd/s/d6d59668fb16425d2c0f78f9228df84d233871c2e6d0109754ffd7c934b81f50" protocol=ttrpc version=3 Sep 12 05:59:52.318656 systemd[1]: Started cri-containerd-01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8.scope - libcontainer container 01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8. 
Sep 12 05:59:52.362999 containerd[1620]: time="2025-09-12T05:59:52.362973086Z" level=info msg="StartContainer for \"01414e3c71e0add02e4886d19b18e98852d5cd7f397983508f17893c00b1f6f8\" returns successfully" Sep 12 05:59:52.535846 kubelet[2932]: I0912 05:59:52.535124 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c76686fdc-w2w9w" podStartSLOduration=5.332561728 podStartE2EDuration="16.535112458s" podCreationTimestamp="2025-09-12 05:59:36 +0000 UTC" firstStartedPulling="2025-09-12 05:59:40.504129496 +0000 UTC m=+41.555511479" lastFinishedPulling="2025-09-12 05:59:51.706680228 +0000 UTC m=+52.758062209" observedRunningTime="2025-09-12 05:59:52.534151062 +0000 UTC m=+53.585533051" watchObservedRunningTime="2025-09-12 05:59:52.535112458 +0000 UTC m=+53.586494443" Sep 12 05:59:53.750356 kubelet[2932]: I0912 05:59:53.750239 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8d7f475b6-hf4g9" podStartSLOduration=30.077322246 podStartE2EDuration="38.66632557s" podCreationTimestamp="2025-09-12 05:59:15 +0000 UTC" firstStartedPulling="2025-09-12 05:59:43.620188412 +0000 UTC m=+44.671570396" lastFinishedPulling="2025-09-12 05:59:52.209191737 +0000 UTC m=+53.260573720" observedRunningTime="2025-09-12 05:59:52.545327191 +0000 UTC m=+53.596709176" watchObservedRunningTime="2025-09-12 05:59:53.66632557 +0000 UTC m=+54.717707553" Sep 12 05:59:55.169640 containerd[1620]: time="2025-09-12T05:59:55.168768411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:55.174595 containerd[1620]: time="2025-09-12T05:59:55.174451421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 05:59:55.251423 containerd[1620]: time="2025-09-12T05:59:55.251393329Z" level=info msg="ImageCreate event 
name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:55.253446 containerd[1620]: time="2025-09-12T05:59:55.253427798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:55.253821 containerd[1620]: time="2025-09-12T05:59:55.253807483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.044318666s" Sep 12 05:59:55.253884 containerd[1620]: time="2025-09-12T05:59:55.253874648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 05:59:55.277814 containerd[1620]: time="2025-09-12T05:59:55.277789941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 05:59:55.456195 containerd[1620]: time="2025-09-12T05:59:55.456132287Z" level=info msg="CreateContainer within sandbox \"b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 05:59:55.516840 containerd[1620]: time="2025-09-12T05:59:55.516681519Z" level=info msg="Container c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:55.540298 containerd[1620]: time="2025-09-12T05:59:55.540269401Z" level=info msg="CreateContainer within sandbox \"b08f603120eea5e4e455e563126d35a80e694e6852bf013cc745f9c57ba1c4fc\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\"" Sep 12 05:59:55.540927 containerd[1620]: time="2025-09-12T05:59:55.540741022Z" level=info msg="StartContainer for \"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\"" Sep 12 05:59:55.551596 containerd[1620]: time="2025-09-12T05:59:55.551561838Z" level=info msg="connecting to shim c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370" address="unix:///run/containerd/s/64e29d4abc655ad6b496f973e3414a96ea491ca70aded6359d51760301e63842" protocol=ttrpc version=3 Sep 12 05:59:55.598739 systemd[1]: Started cri-containerd-c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370.scope - libcontainer container c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370. Sep 12 05:59:55.653381 containerd[1620]: time="2025-09-12T05:59:55.652659322Z" level=info msg="StartContainer for \"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" returns successfully" Sep 12 05:59:56.562712 kubelet[2932]: I0912 05:59:56.559054 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-868998bbf7-9bvbh" podStartSLOduration=26.940349305 podStartE2EDuration="38.559036186s" podCreationTimestamp="2025-09-12 05:59:18 +0000 UTC" firstStartedPulling="2025-09-12 05:59:43.658879901 +0000 UTC m=+44.710261885" lastFinishedPulling="2025-09-12 05:59:55.277566782 +0000 UTC m=+56.328948766" observedRunningTime="2025-09-12 05:59:56.558620503 +0000 UTC m=+57.610002494" watchObservedRunningTime="2025-09-12 05:59:56.559036186 +0000 UTC m=+57.610418171" Sep 12 05:59:56.821535 containerd[1620]: time="2025-09-12T05:59:56.821345053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"7f5f325af5d1dad9457d4ba68bd03ed3fbefd3fff6de6eb82fe8c8c4b508c046\" pid:5280 
exited_at:{seconds:1757656796 nanos:810367113}" Sep 12 05:59:56.924037 containerd[1620]: time="2025-09-12T05:59:56.924004018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:56.924601 containerd[1620]: time="2025-09-12T05:59:56.924581466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 05:59:56.924866 containerd[1620]: time="2025-09-12T05:59:56.924849556Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:56.929786 containerd[1620]: time="2025-09-12T05:59:56.929761784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:59:56.930315 containerd[1620]: time="2025-09-12T05:59:56.929955930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.652015918s" Sep 12 05:59:56.930315 containerd[1620]: time="2025-09-12T05:59:56.929971416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 05:59:56.964375 containerd[1620]: time="2025-09-12T05:59:56.964339450Z" level=info msg="CreateContainer within sandbox \"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 05:59:56.987050 containerd[1620]: 
time="2025-09-12T05:59:56.987016559Z" level=info msg="Container 31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:59:57.004019 containerd[1620]: time="2025-09-12T05:59:57.003991023Z" level=info msg="CreateContainer within sandbox \"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682\"" Sep 12 05:59:57.008789 containerd[1620]: time="2025-09-12T05:59:57.004343940Z" level=info msg="StartContainer for \"31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682\"" Sep 12 05:59:57.008789 containerd[1620]: time="2025-09-12T05:59:57.005690626Z" level=info msg="connecting to shim 31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682" address="unix:///run/containerd/s/126bbf7c07e72c9a2fbb33ac944aa5b22c90d12e62a2d6299671805932e6a6c1" protocol=ttrpc version=3 Sep 12 05:59:57.023751 systemd[1]: Started cri-containerd-31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682.scope - libcontainer container 31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682. 
Sep 12 05:59:57.050443 containerd[1620]: time="2025-09-12T05:59:57.050263831Z" level=info msg="StartContainer for \"31662e205dd7c970008ff1fbcb2e981342a28d3d57e29fe996e93bd0b7e18682\" returns successfully" Sep 12 05:59:57.072985 containerd[1620]: time="2025-09-12T05:59:57.072796794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 06:00:00.364112 containerd[1620]: time="2025-09-12T06:00:00.364071885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:00:00.369640 containerd[1620]: time="2025-09-12T06:00:00.368496980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 06:00:00.375351 containerd[1620]: time="2025-09-12T06:00:00.373360250Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:00:00.380633 containerd[1620]: time="2025-09-12T06:00:00.376654623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 06:00:00.380633 containerd[1620]: time="2025-09-12T06:00:00.377040139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.304211747s" Sep 12 06:00:00.380633 containerd[1620]: time="2025-09-12T06:00:00.377055860Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 06:00:00.380633 containerd[1620]: time="2025-09-12T06:00:00.378726133Z" level=info msg="CreateContainer within sandbox \"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 06:00:00.394946 containerd[1620]: time="2025-09-12T06:00:00.394907839Z" level=info msg="Container ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5: CDI devices from CRI Config.CDIDevices: []" Sep 12 06:00:00.416847 containerd[1620]: time="2025-09-12T06:00:00.416818315Z" level=info msg="CreateContainer within sandbox \"f21c8952d5bec6afa3703c23100ec44209d90fce74a0944f9ca8f80e4fa65c5e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5\"" Sep 12 06:00:00.417206 containerd[1620]: time="2025-09-12T06:00:00.417188931Z" level=info msg="StartContainer for \"ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5\"" Sep 12 06:00:00.419431 containerd[1620]: time="2025-09-12T06:00:00.418936366Z" level=info msg="connecting to shim ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5" address="unix:///run/containerd/s/126bbf7c07e72c9a2fbb33ac944aa5b22c90d12e62a2d6299671805932e6a6c1" protocol=ttrpc version=3 Sep 12 06:00:00.449680 systemd[1]: Started cri-containerd-ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5.scope - libcontainer container ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5. 
Sep 12 06:00:00.486077 containerd[1620]: time="2025-09-12T06:00:00.485927018Z" level=info msg="StartContainer for \"ea30d2eeb83f7bc3df9c8131a4d03a5cb10066b8acc2a47bc400aa207c97f5f5\" returns successfully" Sep 12 06:00:00.690627 kubelet[2932]: I0912 06:00:00.690564 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dftxb" podStartSLOduration=26.578908466 podStartE2EDuration="42.690528942s" podCreationTimestamp="2025-09-12 05:59:18 +0000 UTC" firstStartedPulling="2025-09-12 05:59:44.265783146 +0000 UTC m=+45.317165130" lastFinishedPulling="2025-09-12 06:00:00.377403623 +0000 UTC m=+61.428785606" observedRunningTime="2025-09-12 06:00:00.689056433 +0000 UTC m=+61.740438423" watchObservedRunningTime="2025-09-12 06:00:00.690528942 +0000 UTC m=+61.741910927" Sep 12 06:00:01.396491 kubelet[2932]: I0912 06:00:01.390024 2932 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 06:00:01.396627 kubelet[2932]: I0912 06:00:01.396510 2932 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 06:00:05.679871 containerd[1620]: time="2025-09-12T06:00:05.679842516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"10ac7f413bf711c7af235b14f3d72bfbf1939b1ad94e7a7b7ce23d597196209b\" pid:5385 exited_at:{seconds:1757656805 nanos:679625675}" Sep 12 06:00:07.597716 containerd[1620]: time="2025-09-12T06:00:07.597685013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"b97de99027a4d93363277899dd2ee8c38c3e98e94a5cad7e8548c831383af5e5\" pid:5406 exited_at:{seconds:1757656807 nanos:597350622}" Sep 12 06:00:19.110782 systemd[1]: Started 
sshd@8-139.178.70.104:22-139.178.68.195:51860.service - OpenSSH per-connection server daemon (139.178.68.195:51860). Sep 12 06:00:19.283413 sshd[5437]: Accepted publickey for core from 139.178.68.195 port 51860 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:19.288352 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:19.299256 systemd-logind[1591]: New session 10 of user core. Sep 12 06:00:19.305747 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 06:00:19.759266 kubelet[2932]: I0912 06:00:19.755571 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 06:00:20.128124 sshd[5443]: Connection closed by 139.178.68.195 port 51860 Sep 12 06:00:20.127881 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:20.136477 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Sep 12 06:00:20.136786 systemd[1]: sshd@8-139.178.70.104:22-139.178.68.195:51860.service: Deactivated successfully. Sep 12 06:00:20.139343 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 06:00:20.141246 systemd-logind[1591]: Removed session 10. Sep 12 06:00:22.080516 containerd[1620]: time="2025-09-12T06:00:22.080472376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"285ce9a7501223ae060eac973f28bc047230cc631978adcf1e4ee80801f39c86\" pid:5474 exited_at:{seconds:1757656822 nanos:31441488}" Sep 12 06:00:25.151369 systemd[1]: Started sshd@9-139.178.70.104:22-139.178.68.195:56646.service - OpenSSH per-connection server daemon (139.178.68.195:56646). 
Sep 12 06:00:25.246740 sshd[5508]: Accepted publickey for core from 139.178.68.195 port 56646 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:25.248339 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:25.251435 systemd-logind[1591]: New session 11 of user core. Sep 12 06:00:25.254641 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 06:00:25.629572 sshd[5511]: Connection closed by 139.178.68.195 port 56646 Sep 12 06:00:25.629919 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:25.632439 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Sep 12 06:00:25.632530 systemd[1]: sshd@9-139.178.70.104:22-139.178.68.195:56646.service: Deactivated successfully. Sep 12 06:00:25.633936 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 06:00:25.635357 systemd-logind[1591]: Removed session 11. Sep 12 06:00:26.798620 containerd[1620]: time="2025-09-12T06:00:26.798591635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"864153a04af5d5d30e6f6a3fba76b8e921eacfa244a5d86a781d8034021cb11c\" pid:5536 exited_at:{seconds:1757656826 nanos:798075563}" Sep 12 06:00:28.937856 containerd[1620]: time="2025-09-12T06:00:28.937823092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"bf472e0d5d5ad1a10cb0a9192e01b55bc7d8ee0b17b20d5c239bc3bfba2c3591\" pid:5560 exited_at:{seconds:1757656828 nanos:937434784}" Sep 12 06:00:30.640850 systemd[1]: Started sshd@10-139.178.70.104:22-139.178.68.195:46224.service - OpenSSH per-connection server daemon (139.178.68.195:46224). 
Sep 12 06:00:30.960842 sshd[5588]: Accepted publickey for core from 139.178.68.195 port 46224 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:30.965063 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:30.971307 systemd-logind[1591]: New session 12 of user core. Sep 12 06:00:30.980279 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 06:00:31.562049 sshd[5591]: Connection closed by 139.178.68.195 port 46224 Sep 12 06:00:31.562434 sshd-session[5588]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:31.565099 systemd[1]: sshd@10-139.178.70.104:22-139.178.68.195:46224.service: Deactivated successfully. Sep 12 06:00:31.566521 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 06:00:31.567502 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Sep 12 06:00:31.572717 systemd-logind[1591]: Removed session 12. Sep 12 06:00:36.575500 systemd[1]: Started sshd@11-139.178.70.104:22-139.178.68.195:46228.service - OpenSSH per-connection server daemon (139.178.68.195:46228). Sep 12 06:00:36.617747 sshd[5605]: Accepted publickey for core from 139.178.68.195 port 46228 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:36.619626 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:36.623002 systemd-logind[1591]: New session 13 of user core. Sep 12 06:00:36.627657 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 06:00:36.943241 sshd[5608]: Connection closed by 139.178.68.195 port 46228 Sep 12 06:00:36.943174 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:36.947428 systemd[1]: sshd@11-139.178.70.104:22-139.178.68.195:46228.service: Deactivated successfully. Sep 12 06:00:36.949243 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 12 06:00:36.949926 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Sep 12 06:00:36.950836 systemd-logind[1591]: Removed session 13. Sep 12 06:00:38.700759 containerd[1620]: time="2025-09-12T06:00:38.700722296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"7d1671de2ac449611ade04d31b0e779571374a87f78b4b5d4c5484d749131570\" pid:5633 exited_at:{seconds:1757656838 nanos:699746512}" Sep 12 06:00:41.952978 systemd[1]: Started sshd@12-139.178.70.104:22-139.178.68.195:40272.service - OpenSSH per-connection server daemon (139.178.68.195:40272). Sep 12 06:00:42.311099 sshd[5649]: Accepted publickey for core from 139.178.68.195 port 40272 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:42.312175 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:42.316744 systemd-logind[1591]: New session 14 of user core. Sep 12 06:00:42.321675 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 06:00:43.826695 sshd[5652]: Connection closed by 139.178.68.195 port 40272 Sep 12 06:00:43.828891 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:43.836151 systemd[1]: sshd@12-139.178.70.104:22-139.178.68.195:40272.service: Deactivated successfully. Sep 12 06:00:43.838380 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 06:00:43.845153 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Sep 12 06:00:43.847139 systemd[1]: Started sshd@13-139.178.70.104:22-139.178.68.195:40282.service - OpenSSH per-connection server daemon (139.178.68.195:40282). Sep 12 06:00:43.848099 systemd-logind[1591]: Removed session 14. 
Sep 12 06:00:43.903042 sshd[5665]: Accepted publickey for core from 139.178.68.195 port 40282 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:43.904087 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:43.907330 systemd-logind[1591]: New session 15 of user core. Sep 12 06:00:43.914657 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 06:00:44.228893 sshd[5668]: Connection closed by 139.178.68.195 port 40282 Sep 12 06:00:44.238235 sshd-session[5665]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:44.238648 systemd[1]: Started sshd@14-139.178.70.104:22-139.178.68.195:40292.service - OpenSSH per-connection server daemon (139.178.68.195:40292). Sep 12 06:00:44.278843 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Sep 12 06:00:44.278935 systemd[1]: sshd@13-139.178.70.104:22-139.178.68.195:40282.service: Deactivated successfully. Sep 12 06:00:44.281264 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 06:00:44.282152 systemd-logind[1591]: Removed session 15. Sep 12 06:00:44.337457 sshd[5675]: Accepted publickey for core from 139.178.68.195 port 40292 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:44.338689 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:44.341361 systemd-logind[1591]: New session 16 of user core. Sep 12 06:00:44.348638 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 06:00:44.629723 sshd[5681]: Connection closed by 139.178.68.195 port 40292 Sep 12 06:00:44.629909 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:44.633665 systemd[1]: sshd@14-139.178.70.104:22-139.178.68.195:40292.service: Deactivated successfully. Sep 12 06:00:44.635140 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 12 06:00:44.635814 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Sep 12 06:00:44.636840 systemd-logind[1591]: Removed session 16. Sep 12 06:00:49.640250 systemd[1]: Started sshd@15-139.178.70.104:22-139.178.68.195:40296.service - OpenSSH per-connection server daemon (139.178.68.195:40296). Sep 12 06:00:49.726941 sshd[5693]: Accepted publickey for core from 139.178.68.195 port 40296 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:49.731694 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:49.737484 systemd-logind[1591]: New session 17 of user core. Sep 12 06:00:49.743757 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 06:00:50.813046 sshd[5696]: Connection closed by 139.178.68.195 port 40296 Sep 12 06:00:50.813699 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:50.821398 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Sep 12 06:00:50.821729 systemd[1]: sshd@15-139.178.70.104:22-139.178.68.195:40296.service: Deactivated successfully. Sep 12 06:00:50.823724 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 06:00:50.825186 systemd-logind[1591]: Removed session 17. Sep 12 06:00:51.900194 containerd[1620]: time="2025-09-12T06:00:51.900159199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"1aa945f40105ae459580438fd8178808f9972ac97f8d6e629ec67acd997d7003\" pid:5722 exited_at:{seconds:1757656851 nanos:899769675}" Sep 12 06:00:55.824311 systemd[1]: Started sshd@16-139.178.70.104:22-139.178.68.195:40186.service - OpenSSH per-connection server daemon (139.178.68.195:40186). 
Sep 12 06:00:56.041702 sshd[5733]: Accepted publickey for core from 139.178.68.195 port 40186 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:00:56.044600 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:00:56.049815 systemd-logind[1591]: New session 18 of user core. Sep 12 06:00:56.054726 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 06:00:56.847410 containerd[1620]: time="2025-09-12T06:00:56.847344292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"64a0470cfac239190554b74a0493d2aa1bb0010b262e675c75e8d202adee667c\" pid:5756 exited_at:{seconds:1757656856 nanos:830259824}" Sep 12 06:00:56.975448 sshd[5736]: Connection closed by 139.178.68.195 port 40186 Sep 12 06:00:56.975934 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Sep 12 06:00:56.979355 systemd[1]: sshd@16-139.178.70.104:22-139.178.68.195:40186.service: Deactivated successfully. Sep 12 06:00:56.981030 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 06:00:56.981980 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Sep 12 06:00:56.983301 systemd-logind[1591]: Removed session 18. Sep 12 06:01:01.986486 systemd[1]: Started sshd@17-139.178.70.104:22-139.178.68.195:34502.service - OpenSSH per-connection server daemon (139.178.68.195:34502). Sep 12 06:01:02.096114 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 34502 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:02.097138 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:02.104208 systemd-logind[1591]: New session 19 of user core. Sep 12 06:01:02.111738 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 12 06:01:02.521296 sshd[5780]: Connection closed by 139.178.68.195 port 34502 Sep 12 06:01:02.521019 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:02.527495 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Sep 12 06:01:02.528032 systemd[1]: sshd@17-139.178.70.104:22-139.178.68.195:34502.service: Deactivated successfully. Sep 12 06:01:02.529497 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 06:01:02.531195 systemd-logind[1591]: Removed session 19. Sep 12 06:01:05.563776 containerd[1620]: time="2025-09-12T06:01:05.563710277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"30d434785f8a246e960d0d8053076c91c1b22354a5fb8ea8d5b566a1102edac6\" pid:5805 exited_at:{seconds:1757656865 nanos:563489360}" Sep 12 06:01:07.525256 containerd[1620]: time="2025-09-12T06:01:07.518705311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"833a3ebe243ff92db2c758f98d6bc4bb50ad82c3a83fe5cafe2fec6d15abb4b7\" id:\"735cc90814b2e7bbeb73a7f3df3a608b04a2946d0d6564ff01b82d37b5c3554c\" pid:5826 exited_at:{seconds:1757656867 nanos:518463447}" Sep 12 06:01:07.545586 systemd[1]: Started sshd@18-139.178.70.104:22-139.178.68.195:34516.service - OpenSSH per-connection server daemon (139.178.68.195:34516). Sep 12 06:01:07.703331 sshd[5841]: Accepted publickey for core from 139.178.68.195 port 34516 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:07.708777 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:07.712770 systemd-logind[1591]: New session 20 of user core. Sep 12 06:01:07.718678 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 06:01:08.355956 sshd[5844]: Connection closed by 139.178.68.195 port 34516 Sep 12 06:01:08.359619 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:08.364505 systemd[1]: sshd@18-139.178.70.104:22-139.178.68.195:34516.service: Deactivated successfully. Sep 12 06:01:08.366396 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 06:01:08.368149 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Sep 12 06:01:08.369301 systemd-logind[1591]: Removed session 20. Sep 12 06:01:13.366659 systemd[1]: Started sshd@19-139.178.70.104:22-139.178.68.195:47828.service - OpenSSH per-connection server daemon (139.178.68.195:47828). Sep 12 06:01:13.452108 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 47828 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:13.454458 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:13.457716 systemd-logind[1591]: New session 21 of user core. Sep 12 06:01:13.464661 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 06:01:14.352447 sshd[5881]: Connection closed by 139.178.68.195 port 47828 Sep 12 06:01:14.352355 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:14.361623 systemd[1]: sshd@19-139.178.70.104:22-139.178.68.195:47828.service: Deactivated successfully. Sep 12 06:01:14.362896 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 06:01:14.363783 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Sep 12 06:01:14.365393 systemd-logind[1591]: Removed session 21. Sep 12 06:01:14.367924 systemd[1]: Started sshd@20-139.178.70.104:22-139.178.68.195:47830.service - OpenSSH per-connection server daemon (139.178.68.195:47830). 
Sep 12 06:01:14.415194 sshd[5893]: Accepted publickey for core from 139.178.68.195 port 47830 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:14.416141 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:14.421675 systemd-logind[1591]: New session 22 of user core. Sep 12 06:01:14.423781 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 06:01:15.095184 sshd[5896]: Connection closed by 139.178.68.195 port 47830 Sep 12 06:01:15.102639 systemd[1]: Started sshd@21-139.178.70.104:22-139.178.68.195:47836.service - OpenSSH per-connection server daemon (139.178.68.195:47836). Sep 12 06:01:15.141955 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:15.193254 systemd[1]: sshd@20-139.178.70.104:22-139.178.68.195:47830.service: Deactivated successfully. Sep 12 06:01:15.195471 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 06:01:15.195999 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Sep 12 06:01:15.196996 systemd-logind[1591]: Removed session 22. Sep 12 06:01:15.350503 sshd[5903]: Accepted publickey for core from 139.178.68.195 port 47836 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:15.351139 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:15.353824 systemd-logind[1591]: New session 23 of user core. Sep 12 06:01:15.358667 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 06:01:16.809600 sshd[5909]: Connection closed by 139.178.68.195 port 47836 Sep 12 06:01:16.815195 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:16.825391 systemd[1]: Started sshd@22-139.178.70.104:22-139.178.68.195:47852.service - OpenSSH per-connection server daemon (139.178.68.195:47852). 
Sep 12 06:01:16.828535 systemd[1]: sshd@21-139.178.70.104:22-139.178.68.195:47836.service: Deactivated successfully. Sep 12 06:01:16.831070 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 06:01:16.832489 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. Sep 12 06:01:16.834817 systemd-logind[1591]: Removed session 23. Sep 12 06:01:16.992424 sshd[5920]: Accepted publickey for core from 139.178.68.195 port 47852 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:16.994503 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:16.999487 systemd-logind[1591]: New session 24 of user core. Sep 12 06:01:17.006676 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 06:01:17.995347 sshd[5929]: Connection closed by 139.178.68.195 port 47852 Sep 12 06:01:17.995633 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:18.003102 systemd[1]: sshd@22-139.178.70.104:22-139.178.68.195:47852.service: Deactivated successfully. Sep 12 06:01:18.005541 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 06:01:18.006761 systemd[1]: session-24.scope: Consumed 364ms CPU time, 68.3M memory peak. Sep 12 06:01:18.007497 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit. Sep 12 06:01:18.012038 systemd[1]: Started sshd@23-139.178.70.104:22-139.178.68.195:47856.service - OpenSSH per-connection server daemon (139.178.68.195:47856). Sep 12 06:01:18.018932 systemd-logind[1591]: Removed session 24. Sep 12 06:01:18.102721 sshd[5940]: Accepted publickey for core from 139.178.68.195 port 47856 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:18.104509 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:18.112627 systemd-logind[1591]: New session 25 of user core. 
Sep 12 06:01:18.117705 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 06:01:18.384028 sshd[5943]: Connection closed by 139.178.68.195 port 47856 Sep 12 06:01:18.385407 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:18.387605 systemd-logind[1591]: Session 25 logged out. Waiting for processes to exit. Sep 12 06:01:18.387843 systemd[1]: sshd@23-139.178.70.104:22-139.178.68.195:47856.service: Deactivated successfully. Sep 12 06:01:18.389741 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 06:01:18.391439 systemd-logind[1591]: Removed session 25. Sep 12 06:01:22.849877 containerd[1620]: time="2025-09-12T06:01:22.849839813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"cc6686daa2d0738a6252ed8fcb6357b5e444310fcce10118bcc19d644fd078f5\" pid:5967 exited_at:{seconds:1757656882 nanos:815389898}" Sep 12 06:01:23.403349 systemd[1]: Started sshd@24-139.178.70.104:22-139.178.68.195:60754.service - OpenSSH per-connection server daemon (139.178.68.195:60754). Sep 12 06:01:23.566613 sshd[5981]: Accepted publickey for core from 139.178.68.195 port 60754 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:23.571350 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:23.575806 systemd-logind[1591]: New session 26 of user core. Sep 12 06:01:23.579669 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 06:01:24.295353 sshd[5984]: Connection closed by 139.178.68.195 port 60754 Sep 12 06:01:24.295708 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:24.298055 systemd[1]: sshd@24-139.178.70.104:22-139.178.68.195:60754.service: Deactivated successfully. Sep 12 06:01:24.298186 systemd-logind[1591]: Session 26 logged out. Waiting for processes to exit. 
Sep 12 06:01:24.299338 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 06:01:24.300739 systemd-logind[1591]: Removed session 26. Sep 12 06:01:26.692106 containerd[1620]: time="2025-09-12T06:01:26.692004265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1bc16394141bc9fd4311c9df982c690bac9bd40e084ed115fcfeeb7e49b3370\" id:\"bcf4b40b3bcb3d4a58155075553b67a83cd1ca263c7d610b401c6a871aabdaf5\" pid:6008 exited_at:{seconds:1757656886 nanos:691656080}" Sep 12 06:01:28.772485 containerd[1620]: time="2025-09-12T06:01:28.772451480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a29e5b4963b4df685b32b067aaa438b092cde38d306e1b05ec3d475971632883\" id:\"3de5c659f0cbed3a0a497b037d8b41fdb2f7027f6eb2d4d4e1d6b85870e65279\" pid:6030 exited_at:{seconds:1757656888 nanos:772228418}" Sep 12 06:01:29.308327 systemd[1]: Started sshd@25-139.178.70.104:22-139.178.68.195:60768.service - OpenSSH per-connection server daemon (139.178.68.195:60768). Sep 12 06:01:29.397628 sshd[6041]: Accepted publickey for core from 139.178.68.195 port 60768 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:29.399849 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:29.404492 systemd-logind[1591]: New session 27 of user core. Sep 12 06:01:29.409809 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 06:01:30.005187 sshd[6044]: Connection closed by 139.178.68.195 port 60768 Sep 12 06:01:30.006142 sshd-session[6041]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:30.013492 systemd[1]: sshd@25-139.178.70.104:22-139.178.68.195:60768.service: Deactivated successfully. Sep 12 06:01:30.013679 systemd-logind[1591]: Session 27 logged out. Waiting for processes to exit. Sep 12 06:01:30.017153 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 06:01:30.018470 systemd-logind[1591]: Removed session 27. 
Sep 12 06:01:35.019644 systemd[1]: Started sshd@26-139.178.70.104:22-139.178.68.195:37590.service - OpenSSH per-connection server daemon (139.178.68.195:37590). Sep 12 06:01:35.090670 sshd[6056]: Accepted publickey for core from 139.178.68.195 port 37590 ssh2: RSA SHA256:YuRw/Z+mtbKlizBy5Mlbv/m4kMpLx5w83DuHM3gqcsA Sep 12 06:01:35.091568 sshd-session[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 06:01:35.095298 systemd-logind[1591]: New session 28 of user core. Sep 12 06:01:35.101681 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 06:01:35.428641 sshd[6059]: Connection closed by 139.178.68.195 port 37590 Sep 12 06:01:35.429086 sshd-session[6056]: pam_unix(sshd:session): session closed for user core Sep 12 06:01:35.432178 systemd-logind[1591]: Session 28 logged out. Waiting for processes to exit. Sep 12 06:01:35.432356 systemd[1]: sshd@26-139.178.70.104:22-139.178.68.195:37590.service: Deactivated successfully. Sep 12 06:01:35.433710 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 06:01:35.435666 systemd-logind[1591]: Removed session 28.