Jul 14 23:03:35.730071 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 23:03:35.730086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 23:03:35.730093 kernel: Disabled fast string operations
Jul 14 23:03:35.730097 kernel: BIOS-provided physical RAM map:
Jul 14 23:03:35.730101 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 14 23:03:35.730105 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 14 23:03:35.730111 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 14 23:03:35.730115 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 14 23:03:35.730119 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 14 23:03:35.730123 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 14 23:03:35.730127 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 14 23:03:35.730131 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 14 23:03:35.730135 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 14 23:03:35.730140 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 14 23:03:35.730146 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 14 23:03:35.730150 kernel: NX (Execute Disable) protection: active
Jul 14 23:03:35.730155 kernel: APIC: Static calls initialized
Jul 14 23:03:35.730160 kernel: SMBIOS 2.7 present.
Jul 14 23:03:35.730164 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 14 23:03:35.730169 kernel: vmware: hypercall mode: 0x00
Jul 14 23:03:35.730173 kernel: Hypervisor detected: VMware
Jul 14 23:03:35.730178 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 14 23:03:35.730184 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 14 23:03:35.730188 kernel: vmware: using clock offset of 2974975035 ns
Jul 14 23:03:35.730193 kernel: tsc: Detected 3408.000 MHz processor
Jul 14 23:03:35.730198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 23:03:35.730203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 23:03:35.730208 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 14 23:03:35.730213 kernel: total RAM covered: 3072M
Jul 14 23:03:35.730217 kernel: Found optimal setting for mtrr clean up
Jul 14 23:03:35.730223 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 14 23:03:35.730228 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jul 14 23:03:35.730233 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 23:03:35.730238 kernel: Using GB pages for direct mapping
Jul 14 23:03:35.730243 kernel: ACPI: Early table checksum verification disabled
Jul 14 23:03:35.730247 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 14 23:03:35.730252 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 14 23:03:35.730257 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 14 23:03:35.730262 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 14 23:03:35.730266 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 23:03:35.730274 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 23:03:35.730279 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 14 23:03:35.730284 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 14 23:03:35.730289 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 14 23:03:35.730294 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 14 23:03:35.730300 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 14 23:03:35.730305 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 14 23:03:35.730310 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 14 23:03:35.730315 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 14 23:03:35.730320 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 23:03:35.730325 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 23:03:35.730330 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 14 23:03:35.730335 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 14 23:03:35.730340 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 14 23:03:35.730354 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 14 23:03:35.730361 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 14 23:03:35.730366 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 14 23:03:35.730371 kernel: system APIC only can use physical flat
Jul 14 23:03:35.730376 kernel: APIC: Switched APIC routing to: physical flat
Jul 14 23:03:35.730381 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 14 23:03:35.730386 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 14 23:03:35.730391 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 14 23:03:35.730396 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 14 23:03:35.730401 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 14 23:03:35.730406 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 14 23:03:35.730412 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 14 23:03:35.730417 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 14 23:03:35.730422 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jul 14 23:03:35.730427 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jul 14 23:03:35.730432 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jul 14 23:03:35.730437 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jul 14 23:03:35.730441 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jul 14 23:03:35.730446 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jul 14 23:03:35.730451 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jul 14 23:03:35.730456 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jul 14 23:03:35.730462 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jul 14 23:03:35.730467 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jul 14 23:03:35.730472 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jul 14 23:03:35.730476 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jul 14 23:03:35.730481 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jul 14 23:03:35.730486 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jul 14 23:03:35.730491 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jul 14 23:03:35.730496 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jul 14 23:03:35.730501 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jul 14 23:03:35.730506 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jul 14 23:03:35.730512 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jul 14 23:03:35.730517 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jul 14 23:03:35.730522 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jul 14 23:03:35.730526 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jul 14 23:03:35.730532 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jul 14 23:03:35.730536 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jul 14 23:03:35.730541 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jul 14 23:03:35.730546 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 14 23:03:35.730551 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 14 23:03:35.730556 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 14 23:03:35.730562 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 14 23:03:35.730567 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 14 23:03:35.730571 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 14 23:03:35.730576 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 14 23:03:35.730581 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 14 23:03:35.730586 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 14 23:03:35.730591 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 14 23:03:35.730595 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 14 23:03:35.730600 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 14 23:03:35.730605 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 14 23:03:35.730611 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 14 23:03:35.730616 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 14 23:03:35.730621 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 14 23:03:35.730625 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 14 23:03:35.730630 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 14 23:03:35.730635 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 14 23:03:35.730640 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 14 23:03:35.730645 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 14 23:03:35.730650 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 14 23:03:35.730655 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 14 23:03:35.730659 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 14 23:03:35.730666 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 14 23:03:35.730671 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 14 23:03:35.730679 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 14 23:03:35.730685 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 14 23:03:35.730691 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 14 23:03:35.730696 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 14 23:03:35.730701 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 14 23:03:35.730707 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 14 23:03:35.730713 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 14 23:03:35.730718 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 14 23:03:35.730723 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 14 23:03:35.730729 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 14 23:03:35.730734 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 14 23:03:35.730739 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 14 23:03:35.730745 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 14 23:03:35.730750 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 14 23:03:35.730755 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 14 23:03:35.730760 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 14 23:03:35.730766 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 14 23:03:35.730772 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 14 23:03:35.730777 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 14 23:03:35.730782 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 14 23:03:35.730788 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 14 23:03:35.730793 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 14 23:03:35.730798 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 14 23:03:35.730803 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 14 23:03:35.730808 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 14 23:03:35.730813 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 14 23:03:35.730819 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 14 23:03:35.730826 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 14 23:03:35.730831 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 14 23:03:35.730836 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 14 23:03:35.730841 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 14 23:03:35.730847 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 14 23:03:35.730852 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 14 23:03:35.730857 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 14 23:03:35.730862 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 14 23:03:35.730867 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 14 23:03:35.730873 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 14 23:03:35.730879 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 14 23:03:35.730884 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 14 23:03:35.730889 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 14 23:03:35.730895 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 14 23:03:35.730900 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 14 23:03:35.730905 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 14 23:03:35.730911 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 14 23:03:35.730916 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 14 23:03:35.730921 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 14 23:03:35.730926 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 14 23:03:35.730931 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 14 23:03:35.730938 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 14 23:03:35.730943 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 14 23:03:35.730948 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 14 23:03:35.730953 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 14 23:03:35.730959 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 14 23:03:35.730964 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 14 23:03:35.730969 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 14 23:03:35.730975 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 14 23:03:35.730980 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 14 23:03:35.730985 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 14 23:03:35.730991 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 14 23:03:35.730997 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 14 23:03:35.731002 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 14 23:03:35.731007 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 14 23:03:35.731013 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 14 23:03:35.731018 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 14 23:03:35.731023 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 14 23:03:35.731028 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 14 23:03:35.731033 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 14 23:03:35.731039 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 14 23:03:35.731045 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 14 23:03:35.731050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 14 23:03:35.731056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 14 23:03:35.731061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 14 23:03:35.731067 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 14 23:03:35.731072 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 14 23:03:35.731078 kernel: Zone ranges:
Jul 14 23:03:35.731083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 23:03:35.731088 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 14 23:03:35.731095 kernel: Normal empty
Jul 14 23:03:35.731100 kernel: Movable zone start for each node
Jul 14 23:03:35.731106 kernel: Early memory node ranges
Jul 14 23:03:35.731111 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 14 23:03:35.731117 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 14 23:03:35.731122 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 14 23:03:35.731127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 14 23:03:35.731133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 23:03:35.731138 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 14 23:03:35.731144 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 14 23:03:35.731150 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 14 23:03:35.731155 kernel: system APIC only can use physical flat
Jul 14 23:03:35.731161 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 14 23:03:35.731166 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 14 23:03:35.731171 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 14 23:03:35.731177 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 14 23:03:35.731182 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 14 23:03:35.731187 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 14 23:03:35.731192 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 14 23:03:35.731198 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 14 23:03:35.731204 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 14 23:03:35.731209 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 14 23:03:35.731215 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 14 23:03:35.731220 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 14 23:03:35.731225 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 14 23:03:35.731231 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 14 23:03:35.731236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 14 23:03:35.731241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 14 23:03:35.731246 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 14 23:03:35.731253 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 14 23:03:35.731258 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 14 23:03:35.731263 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 14 23:03:35.731268 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 14 23:03:35.731274 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 14 23:03:35.731279 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 14 23:03:35.731284 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 14 23:03:35.731290 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 14 23:03:35.731295 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 14 23:03:35.731300 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 14 23:03:35.731306 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 14 23:03:35.731312 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 14 23:03:35.731317 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 14 23:03:35.731322 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 14 23:03:35.731328 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 14 23:03:35.731333 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 14 23:03:35.731338 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 14 23:03:35.731351 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 14 23:03:35.731357 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 14 23:03:35.731362 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 14 23:03:35.731369 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 14 23:03:35.731374 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 14 23:03:35.731379 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 14 23:03:35.731384 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 14 23:03:35.731390 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 14 23:03:35.731395 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 14 23:03:35.731400 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 14 23:03:35.731406 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 14 23:03:35.731433 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 14 23:03:35.731439 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 14 23:03:35.731445 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 14 23:03:35.731450 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 14 23:03:35.731456 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 14 23:03:35.731478 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 14 23:03:35.731483 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 14 23:03:35.731488 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 14 23:03:35.731493 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 14 23:03:35.731499 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 14 23:03:35.731504 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 14 23:03:35.731510 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 14 23:03:35.731515 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 14 23:03:35.731521 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 14 23:03:35.731526 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 14 23:03:35.731531 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 14 23:03:35.731536 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 14 23:03:35.731542 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 14 23:03:35.731547 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 14 23:03:35.731553 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 14 23:03:35.731558 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 14 23:03:35.731564 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 14 23:03:35.731570 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 14 23:03:35.731575 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 14 23:03:35.731580 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 14 23:03:35.731586 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 14 23:03:35.731591 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 14 23:03:35.731596 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 14 23:03:35.731601 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 14 23:03:35.731607 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 14 23:03:35.731613 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 14 23:03:35.731618 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 14 23:03:35.731624 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 14 23:03:35.731629 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 14 23:03:35.731634 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 14 23:03:35.731639 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 14 23:03:35.731645 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 14 23:03:35.731650 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 14 23:03:35.731655 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 14 23:03:35.731660 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 14 23:03:35.731667 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 14 23:03:35.731672 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 14 23:03:35.731677 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 14 23:03:35.731683 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 14 23:03:35.731688 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 14 23:03:35.731693 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 14 23:03:35.731698 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 14 23:03:35.731704 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 14 23:03:35.731709 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 14 23:03:35.731714 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 14 23:03:35.731721 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 14 23:03:35.731726 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 14 23:03:35.731731 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 14 23:03:35.731737 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 14 23:03:35.731742 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 14 23:03:35.731747 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 14 23:03:35.731752 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 14 23:03:35.731758 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 14 23:03:35.731763 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 14 23:03:35.731769 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 14 23:03:35.731775 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 14 23:03:35.731780 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 14 23:03:35.731786 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 14 23:03:35.731791 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 14 23:03:35.731796 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 14 23:03:35.731802 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 14 23:03:35.731807 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 14 23:03:35.731812 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 14 23:03:35.731817 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 14 23:03:35.731824 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 14 23:03:35.731829 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 14 23:03:35.731834 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 14 23:03:35.731840 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 14 23:03:35.731845 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 14 23:03:35.731851 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 14 23:03:35.731856 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 14 23:03:35.731861 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 14 23:03:35.731866 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 14 23:03:35.731872 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 14 23:03:35.731878 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 14 23:03:35.731883 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 14 23:03:35.731888 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 14 23:03:35.731894 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 14 23:03:35.731899 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 14 23:03:35.731905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 14 23:03:35.731910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 23:03:35.731916 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 14 23:03:35.731921 kernel: TSC deadline timer available
Jul 14 23:03:35.731926 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 14 23:03:35.731933 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 14 23:03:35.731938 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 14 23:03:35.731944 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 23:03:35.731949 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 14 23:03:35.731955 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 14 23:03:35.731960 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 14 23:03:35.731965 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 14 23:03:35.731971 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 14 23:03:35.731977 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 14 23:03:35.731982 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 14 23:03:35.731988 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 14 23:03:35.732000 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 14 23:03:35.732006 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 14 23:03:35.732012 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 14 23:03:35.732017 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 14 23:03:35.732023 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 14 23:03:35.732029 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 14 23:03:35.732036 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 14 23:03:35.732041 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 14 23:03:35.732047 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 14 23:03:35.732052 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 14 23:03:35.732058 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 14 23:03:35.732064 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 23:03:35.732071 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 23:03:35.732076 kernel: random: crng init done
Jul 14 23:03:35.732083 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 14 23:03:35.732089 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 14 23:03:35.732094 kernel: printk: log_buf_len min size: 262144 bytes
Jul 14 23:03:35.732100 kernel: printk: log_buf_len: 1048576 bytes
Jul 14 23:03:35.732106 kernel: printk: early log buf free: 239648(91%)
Jul 14 23:03:35.732112 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 23:03:35.732118 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 14 23:03:35.732123 kernel: Fallback order for Node 0: 0
Jul 14 23:03:35.732129 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 14 23:03:35.732136 kernel: Policy zone: DMA32
Jul 14 23:03:35.732142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 23:03:35.732148 kernel: Memory: 1936340K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160028K reserved, 0K cma-reserved)
Jul 14 23:03:35.732155 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 14 23:03:35.732160 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 23:03:35.732166 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 23:03:35.732173 kernel: Dynamic Preempt: voluntary
Jul 14 23:03:35.732179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 23:03:35.732185 kernel: rcu: RCU event tracing is enabled.
Jul 14 23:03:35.732190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 14 23:03:35.732196 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 23:03:35.732202 kernel: Rude variant of Tasks RCU enabled.
Jul 14 23:03:35.732208 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 23:03:35.732213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 23:03:35.732219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 14 23:03:35.732226 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 14 23:03:35.732232 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 14 23:03:35.732237 kernel: Console: colour VGA+ 80x25
Jul 14 23:03:35.732243 kernel: printk: console [tty0] enabled
Jul 14 23:03:35.732249 kernel: printk: console [ttyS0] enabled
Jul 14 23:03:35.732254 kernel: ACPI: Core revision 20230628
Jul 14 23:03:35.732260 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 14 23:03:35.732266 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 23:03:35.732272 kernel: x2apic enabled
Jul 14 23:03:35.732279 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 23:03:35.732284 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 23:03:35.732290 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 14 23:03:35.732296 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 14 23:03:35.732302 kernel: Disabled fast string operations
Jul 14 23:03:35.732307 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 14 23:03:35.732313 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 14 23:03:35.732319 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 23:03:35.732325 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 14 23:03:35.732332 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 14 23:03:35.732337 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 14 23:03:35.732492 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 14 23:03:35.732500 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 14 23:03:35.732506 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 23:03:35.732511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 23:03:35.732517 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 14 23:03:35.732523 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 14 23:03:35.732529 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 14 23:03:35.732537 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 14 23:03:35.732543 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 23:03:35.732548 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 23:03:35.732554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 23:03:35.732560 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 23:03:35.732566 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 23:03:35.732572 kernel: Freeing SMP alternatives memory: 32K
Jul 14 23:03:35.732598 kernel: pid_max: default: 131072 minimum: 1024
Jul 14 23:03:35.732604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 23:03:35.732610 kernel: landlock: Up and running.
Jul 14 23:03:35.732616 kernel: SELinux: Initializing.
Jul 14 23:03:35.732639 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 23:03:35.732646 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 23:03:35.732651 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 14 23:03:35.732657 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732663 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732669 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732674 kernel: Performance Events: Skylake events, core PMU driver.
Jul 14 23:03:35.732681 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 14 23:03:35.732687 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 14 23:03:35.732693 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 14 23:03:35.732698 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 14 23:03:35.732704 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 14 23:03:35.732709 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 14 23:03:35.732715 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 14 23:03:35.732721 kernel: ... version: 1
Jul 14 23:03:35.732728 kernel: ... bit width: 48
Jul 14 23:03:35.732734 kernel: ... generic registers: 4
Jul 14 23:03:35.732739 kernel: ... value mask: 0000ffffffffffff
Jul 14 23:03:35.732745 kernel: ...
max period: 000000007fffffff Jul 14 23:03:35.732751 kernel: ... fixed-purpose events: 0 Jul 14 23:03:35.732756 kernel: ... event mask: 000000000000000f Jul 14 23:03:35.732762 kernel: signal: max sigframe size: 1776 Jul 14 23:03:35.732768 kernel: rcu: Hierarchical SRCU implementation. Jul 14 23:03:35.732774 kernel: rcu: Max phase no-delay instances is 400. Jul 14 23:03:35.732781 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 14 23:03:35.732786 kernel: smp: Bringing up secondary CPUs ... Jul 14 23:03:35.732792 kernel: smpboot: x86: Booting SMP configuration: Jul 14 23:03:35.732798 kernel: .... node #0, CPUs: #1 Jul 14 23:03:35.732803 kernel: Disabled fast string operations Jul 14 23:03:35.732809 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 14 23:03:35.732815 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 14 23:03:35.732820 kernel: smp: Brought up 1 node, 2 CPUs Jul 14 23:03:35.732826 kernel: smpboot: Max logical packages: 128 Jul 14 23:03:35.732832 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 14 23:03:35.732838 kernel: devtmpfs: initialized Jul 14 23:03:35.732844 kernel: x86/mm: Memory block size: 128MB Jul 14 23:03:35.732850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 14 23:03:35.732856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 23:03:35.732862 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 14 23:03:35.732867 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 23:03:35.732873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 23:03:35.732879 kernel: audit: initializing netlink subsys (disabled) Jul 14 23:03:35.732885 kernel: audit: type=2000 audit(1752534214.086:1): state=initialized audit_enabled=0 res=1 Jul 14 23:03:35.732892 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 23:03:35.732897 
kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 23:03:35.732903 kernel: cpuidle: using governor menu Jul 14 23:03:35.732908 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 14 23:03:35.732914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 23:03:35.732920 kernel: dca service started, version 1.12.1 Jul 14 23:03:35.732926 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 14 23:03:35.732931 kernel: PCI: Using configuration type 1 for base access Jul 14 23:03:35.732937 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 14 23:03:35.732944 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 23:03:35.732950 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 23:03:35.732955 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 23:03:35.732961 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 23:03:35.732967 kernel: ACPI: Added _OSI(Module Device) Jul 14 23:03:35.732973 kernel: ACPI: Added _OSI(Processor Device) Jul 14 23:03:35.732979 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 23:03:35.732984 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 23:03:35.732990 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 14 23:03:35.732996 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 14 23:03:35.733002 kernel: ACPI: Interpreter enabled Jul 14 23:03:35.733009 kernel: ACPI: PM: (supports S0 S1 S5) Jul 14 23:03:35.733015 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 23:03:35.733020 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 23:03:35.733026 kernel: PCI: Using E820 reservations for host bridge windows Jul 14 23:03:35.733032 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 14 23:03:35.733038 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 14 23:03:35.733119 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 23:03:35.733180 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 14 23:03:35.733232 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 14 23:03:35.733241 kernel: PCI host bridge to bus 0000:00 Jul 14 23:03:35.733294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 23:03:35.733385 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 14 23:03:35.733659 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 23:03:35.733733 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 23:03:35.733781 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 14 23:03:35.733844 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 14 23:03:35.733905 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 14 23:03:35.733963 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 14 23:03:35.734020 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 14 23:03:35.734078 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 14 23:03:35.734131 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 14 23:03:35.734183 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 14 23:03:35.734234 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 14 23:03:35.734285 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 14 23:03:35.734336 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 14 23:03:35.734437 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 14 23:03:35.736450 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 14 23:03:35.736514 kernel: pci 0000:00:07.3: quirk: 
[io 0x1040-0x104f] claimed by PIIX4 SMB Jul 14 23:03:35.736575 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 14 23:03:35.736630 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 14 23:03:35.736684 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 14 23:03:35.736741 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 14 23:03:35.736798 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 14 23:03:35.736851 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 14 23:03:35.736902 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 14 23:03:35.736954 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 14 23:03:35.737005 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 23:03:35.737061 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 14 23:03:35.737118 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737174 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737230 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737284 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737340 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737402 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737464 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737520 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737580 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737633 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737689 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737741 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737797 kernel: pci 0000:00:15.6: [15ad:07a0] 
type 01 class 0x060400 Jul 14 23:03:35.737853 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.737910 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.737963 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738019 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738073 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738129 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738185 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738242 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738295 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738443 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738501 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738559 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738616 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738675 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738728 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738785 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738838 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.738896 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.738952 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739009 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739061 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739118 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739172 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739227 kernel: 
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739283 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739339 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739408 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739466 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739519 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739578 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739631 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739691 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739745 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739801 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739855 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.739912 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.739965 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.740023 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.740077 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.740134 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.740187 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.740244 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.740296 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.743444 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.743520 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.743585 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.743675 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Jul 14 23:03:35.743749 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.743801 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.743861 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:03:35.743913 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.744001 kernel: pci_bus 0000:01: extended config space not accessible Jul 14 23:03:35.744055 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 23:03:35.744108 kernel: pci_bus 0000:02: extended config space not accessible Jul 14 23:03:35.744117 kernel: acpiphp: Slot [32] registered Jul 14 23:03:35.744123 kernel: acpiphp: Slot [33] registered Jul 14 23:03:35.744132 kernel: acpiphp: Slot [34] registered Jul 14 23:03:35.744137 kernel: acpiphp: Slot [35] registered Jul 14 23:03:35.744143 kernel: acpiphp: Slot [36] registered Jul 14 23:03:35.744149 kernel: acpiphp: Slot [37] registered Jul 14 23:03:35.744155 kernel: acpiphp: Slot [38] registered Jul 14 23:03:35.744161 kernel: acpiphp: Slot [39] registered Jul 14 23:03:35.744167 kernel: acpiphp: Slot [40] registered Jul 14 23:03:35.744191 kernel: acpiphp: Slot [41] registered Jul 14 23:03:35.744197 kernel: acpiphp: Slot [42] registered Jul 14 23:03:35.744203 kernel: acpiphp: Slot [43] registered Jul 14 23:03:35.744210 kernel: acpiphp: Slot [44] registered Jul 14 23:03:35.744216 kernel: acpiphp: Slot [45] registered Jul 14 23:03:35.744222 kernel: acpiphp: Slot [46] registered Jul 14 23:03:35.744244 kernel: acpiphp: Slot [47] registered Jul 14 23:03:35.744250 kernel: acpiphp: Slot [48] registered Jul 14 23:03:35.744256 kernel: acpiphp: Slot [49] registered Jul 14 23:03:35.744262 kernel: acpiphp: Slot [50] registered Jul 14 23:03:35.744272 kernel: acpiphp: Slot [51] registered Jul 14 23:03:35.744278 kernel: acpiphp: Slot [52] registered Jul 14 23:03:35.744294 kernel: acpiphp: Slot [53] registered Jul 14 23:03:35.744302 kernel: acpiphp: Slot [54] registered Jul 14 
23:03:35.744311 kernel: acpiphp: Slot [55] registered Jul 14 23:03:35.744320 kernel: acpiphp: Slot [56] registered Jul 14 23:03:35.744326 kernel: acpiphp: Slot [57] registered Jul 14 23:03:35.744331 kernel: acpiphp: Slot [58] registered Jul 14 23:03:35.744337 kernel: acpiphp: Slot [59] registered Jul 14 23:03:35.744351 kernel: acpiphp: Slot [60] registered Jul 14 23:03:35.744357 kernel: acpiphp: Slot [61] registered Jul 14 23:03:35.744365 kernel: acpiphp: Slot [62] registered Jul 14 23:03:35.744373 kernel: acpiphp: Slot [63] registered Jul 14 23:03:35.744435 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 23:03:35.744488 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:03:35.744539 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.744590 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.744641 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 23:03:35.744692 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 23:03:35.744745 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 23:03:35.744797 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 23:03:35.744868 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 23:03:35.747657 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 14 23:03:35.747722 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 14 23:03:35.747780 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 23:03:35.747835 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:03:35.747890 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.747948 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Jul 14 23:03:35.748002 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:03:35.748055 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:03:35.748108 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.748162 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:03:35.748216 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 23:03:35.748269 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.748325 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:03:35.748423 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:03:35.748494 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:03:35.748545 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.748596 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.748649 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:03:35.748700 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.748751 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.748807 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:03:35.748858 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.748910 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.748963 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:03:35.749026 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.749084 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.749136 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:03:35.749187 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.749238 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.749290 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:03:35.749348 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.749407 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.749505 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 14 23:03:35.749560 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 14 23:03:35.749613 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 14 23:03:35.749666 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 14 23:03:35.749719 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 14 23:03:35.749771 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:03:35.749825 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 23:03:35.749880 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 23:03:35.749933 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 23:03:35.749986 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:03:35.750038 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:03:35.750090 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 23:03:35.750144 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:03:35.750196 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:03:35.750247 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.750303 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.752402 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:03:35.752460 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:03:35.752513 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.752565 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.752620 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:03:35.752672 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.752724 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.752782 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:03:35.752835 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.752887 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.752941 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:03:35.752993 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.753045 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.753099 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:03:35.753151 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.753206 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.753259 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:03:35.753311 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.753373 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.753428 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:03:35.753480 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:03:35.753533 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.753586 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.753644 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:03:35.753697 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:03:35.753750 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.753803 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.753856 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:03:35.753909 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:03:35.753961 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.754016 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.754071 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:03:35.754123 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.754176 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.754231 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:03:35.754285 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.754338 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.754405 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 23:03:35.754466 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.754519 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:03:35.754574 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:03:35.754626 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.754678 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.754732 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:03:35.754785 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.754838 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.754896 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:03:35.754949 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:03:35.755001 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.755054 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.755109 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:03:35.755161 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:03:35.755214 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.755266 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.755323 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:03:35.757541 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.757603 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.757662 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:03:35.757716 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.757769 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.757823 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 
23:03:35.757875 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.757932 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:03:35.757986 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:03:35.758040 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.758092 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.758171 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:03:35.758225 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.758278 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.758332 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:03:35.758400 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.758460 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.758468 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 23:03:35.758475 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 23:03:35.758481 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 23:03:35.758487 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 23:03:35.758493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 23:03:35.758499 kernel: iommu: Default domain type: Translated Jul 14 23:03:35.758507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 23:03:35.758513 kernel: PCI: Using ACPI for IRQ routing Jul 14 23:03:35.758519 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 23:03:35.758525 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 23:03:35.758531 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 23:03:35.758584 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 23:03:35.758638 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jul 14 23:03:35.758691 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 23:03:35.758700 kernel: vgaarb: loaded Jul 14 23:03:35.758708 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 23:03:35.758714 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 23:03:35.758720 kernel: clocksource: Switched to clocksource tsc-early Jul 14 23:03:35.758726 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 23:03:35.758732 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 23:03:35.758738 kernel: pnp: PnP ACPI init Jul 14 23:03:35.758797 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 23:03:35.758848 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 14 23:03:35.758899 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 23:03:35.758952 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 23:03:35.759003 kernel: pnp 00:06: [dma 2] Jul 14 23:03:35.759056 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 23:03:35.759105 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 23:03:35.759152 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 23:03:35.759161 kernel: pnp: PnP ACPI: found 8 devices Jul 14 23:03:35.759170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 23:03:35.759176 kernel: NET: Registered PF_INET protocol family Jul 14 23:03:35.759182 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 23:03:35.759188 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 14 23:03:35.759194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 23:03:35.759200 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 23:03:35.759206 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 14 23:03:35.759212 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 23:03:35.759218 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:03:35.759225 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:03:35.759232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 23:03:35.759237 kernel: NET: Registered PF_XDP protocol family Jul 14 23:03:35.759292 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 23:03:35.759376 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 23:03:35.759434 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 23:03:35.759488 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 23:03:35.759545 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 23:03:35.759599 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 23:03:35.759652 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 23:03:35.759706 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 23:03:35.759759 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 23:03:35.759813 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 23:03:35.759869 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 14 23:03:35.759922 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 23:03:35.759976 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 
23:03:35.760029 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 23:03:35.760082 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 23:03:35.760138 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 23:03:35.760192 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 23:03:35.760245 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 23:03:35.760298 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 23:03:35.760362 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 23:03:35.760418 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 23:03:35.760475 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 23:03:35.760528 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 23:03:35.760582 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.760634 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.760688 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760740 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.760793 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760849 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.760902 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760955 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761008 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761061 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 
14 23:03:35.761115 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761167 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761220 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761276 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761336 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761713 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761769 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761838 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761891 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761945 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762014 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762070 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762122 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762175 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762226 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762278 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762329 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762393 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762447 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762501 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762553 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762604 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762656 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762708 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762761 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762812 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762864 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762919 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762971 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763023 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763075 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763126 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763178 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763230 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763282 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763333 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.765936 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.765993 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766048 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766100 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766153 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766205 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766257 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766309 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766374 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jul 14 23:03:35.766431 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766483 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766535 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766597 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766653 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766704 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766756 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766807 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766859 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766911 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766966 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767019 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767070 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767121 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767172 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767223 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767275 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767326 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767418 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767475 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767526 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767579 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767630 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767682 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767734 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767785 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767836 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767888 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767939 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767994 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.768046 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.768099 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 23:03:35.768151 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 23:03:35.768203 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:03:35.768254 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.768305 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.768368 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 14 23:03:35.768453 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:03:35.768554 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:03:35.768606 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.768658 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.768751 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:03:35.768830 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 23:03:35.768906 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.768962 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 
23:03:35.769016 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:03:35.769072 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:03:35.769123 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.769175 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.769227 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:03:35.769279 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.769332 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.769431 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:03:35.769484 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.769536 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.769592 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:03:35.769647 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.769700 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.769752 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:03:35.769803 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.769854 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.769908 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:03:35.769961 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.770013 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.770068 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 14 23:03:35.770122 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:03:35.770174 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:03:35.770242 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jul 14 23:03:35.770314 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.770382 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:03:35.770441 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:03:35.770510 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.770562 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.770615 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:03:35.770668 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:03:35.770720 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.770772 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.770824 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:03:35.770875 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.770927 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.770981 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:03:35.771033 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.771085 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.771137 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:03:35.771189 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.771241 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.771292 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:03:35.771352 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.771405 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.771465 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:03:35.771517 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.771570 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.771623 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:03:35.771675 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:03:35.771727 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.771779 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.771832 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:03:35.771884 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:03:35.771935 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.771990 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.772044 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:03:35.772096 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:03:35.772148 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.772199 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.772252 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:03:35.772304 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.772424 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.772478 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:03:35.772533 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.772585 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.772636 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 23:03:35.772687 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.772738 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 
23:03:35.772790 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:03:35.772842 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.772894 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.772946 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:03:35.772997 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.773051 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.773103 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:03:35.773156 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:03:35.773207 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.773259 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.773311 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:03:35.773378 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:03:35.773477 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.773530 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.773585 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:03:35.773637 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.773689 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.773741 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:03:35.773794 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.773846 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.773898 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 23:03:35.773951 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.774003 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jul 14 23:03:35.774055 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:03:35.774110 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.774162 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.774214 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:03:35.774267 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.774318 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.774402 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:03:35.774458 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.774510 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.774560 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:03:35.774609 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:03:35.774655 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 23:03:35.774701 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:03:35.774745 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:03:35.774796 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 23:03:35.774844 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.774892 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.774940 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:03:35.774990 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:03:35.775038 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 23:03:35.775086 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:03:35.775134 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:03:35.775186 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 23:03:35.775235 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.775283 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.775337 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 23:03:35.775399 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.775447 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:03:35.775499 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 23:03:35.775547 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.775594 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.775646 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.775698 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.775753 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.775802 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.775854 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.775903 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.775955 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.776006 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.776058 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.776106 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.776169 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 23:03:35.776218 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 23:03:35.776268 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.776321 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 23:03:35.776479 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.776529 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.776582 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 23:03:35.776631 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.776682 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.776737 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.776786 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.776839 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.776887 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.776938 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.776986 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.777039 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.777088 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.777138 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.777187 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.777238 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 23:03:35.777286 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.777334 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.777396 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 23:03:35.777444 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.777492 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.777543 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jul 14 23:03:35.777591 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.777639 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.777690 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.777741 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.777796 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.777845 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.777897 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.777949 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:03:35.778000 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.778051 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.778103 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.778151 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.778203 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 23:03:35.778252 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.778303 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.778364 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 23:03:35.778447 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.778497 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.778551 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.778601 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.778654 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.778708 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.778761 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.778812 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:03:35.778870 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.778921 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.778974 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.779027 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.779080 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.779130 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.779187 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 23:03:35.779201 kernel: PCI: CLS 32 bytes, default 64 Jul 14 23:03:35.779208 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 23:03:35.779215 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 23:03:35.779223 kernel: clocksource: Switched to clocksource tsc Jul 14 23:03:35.779230 kernel: Initialise system trusted keyrings Jul 14 23:03:35.779237 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 23:03:35.779243 kernel: Key type asymmetric registered Jul 14 23:03:35.779249 kernel: Asymmetric key parser 'x509' registered Jul 14 23:03:35.779256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 23:03:35.779262 kernel: io scheduler mq-deadline registered Jul 14 23:03:35.779285 kernel: io scheduler kyber registered Jul 14 23:03:35.779292 kernel: io scheduler bfq registered Jul 14 23:03:35.780834 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 23:03:35.780904 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.780963 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 23:03:35.781018 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781074 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 23:03:35.781126 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781180 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 23:03:35.781233 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781291 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 23:03:35.781352 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781435 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 23:03:35.781506 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781560 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 23:03:35.781617 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781671 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 23:03:35.781725 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783385 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 23:03:35.783465 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783525 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 23:03:35.783584 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783639 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 23:03:35.783692 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783746 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 23:03:35.783800 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783853 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 23:03:35.783911 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783966 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 23:03:35.784019 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784073 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 23:03:35.784127 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784183 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 23:03:35.784238 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784292 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 23:03:35.784362 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784442 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 23:03:35.784512 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784565 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 23:03:35.784621 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784675 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 23:03:35.784727 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784781 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 23:03:35.784834 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784888 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 23:03:35.784944 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784999 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 23:03:35.785052 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785105 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 23:03:35.785158 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785214 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 23:03:35.785268 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785322 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 23:03:35.785402 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785455 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 23:03:35.785508 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785565 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 23:03:35.785617 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785671 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 23:03:35.785723 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785778 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 23:03:35.785831 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785888 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 23:03:35.785940 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785994 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 23:03:35.786047 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.786062 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jul 14 23:03:35.786073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 23:03:35.786080 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 23:03:35.786086 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 23:03:35.786092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 23:03:35.786098 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 23:03:35.786158 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 23:03:35.786210 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T23:03:35 UTC (1752534215) Jul 14 23:03:35.786258 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 23:03:35.786269 kernel: intel_pstate: CPU model not supported Jul 14 23:03:35.786276 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 23:03:35.786282 kernel: NET: Registered PF_INET6 protocol family Jul 14 23:03:35.786288 kernel: Segment Routing with IPv6 Jul 14 23:03:35.786295 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 23:03:35.786301 kernel: NET: Registered PF_PACKET protocol family Jul 14 23:03:35.786307 kernel: Key type dns_resolver registered Jul 14 23:03:35.786314 kernel: IPI shorthand broadcast: enabled Jul 14 23:03:35.786320 kernel: sched_clock: Marking stable (916003494, 218892448)->(1194738315, -59842373) Jul 14 23:03:35.786328 kernel: registered taskstats version 1 Jul 14 23:03:35.786334 kernel: Loading compiled-in X.509 certificates Jul 14 23:03:35.786340 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 23:03:35.786355 kernel: Key type .fscrypt registered Jul 14 23:03:35.786364 kernel: Key type fscrypt-provisioning registered Jul 14 23:03:35.786370 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 23:03:35.786377 kernel: ima: Allocated hash algorithm: sha1 Jul 14 23:03:35.786383 kernel: ima: No architecture policies found Jul 14 23:03:35.786389 kernel: clk: Disabling unused clocks Jul 14 23:03:35.786397 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 23:03:35.786403 kernel: Write protecting the kernel read-only data: 36864k Jul 14 23:03:35.786409 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 23:03:35.786416 kernel: Run /init as init process Jul 14 23:03:35.786422 kernel: with arguments: Jul 14 23:03:35.786428 kernel: /init Jul 14 23:03:35.786434 kernel: with environment: Jul 14 23:03:35.786440 kernel: HOME=/ Jul 14 23:03:35.786446 kernel: TERM=linux Jul 14 23:03:35.786453 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 23:03:35.786461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 23:03:35.786469 systemd[1]: Detected virtualization vmware. Jul 14 23:03:35.786475 systemd[1]: Detected architecture x86-64. Jul 14 23:03:35.786483 systemd[1]: Running in initrd. Jul 14 23:03:35.786489 systemd[1]: No hostname configured, using default hostname. Jul 14 23:03:35.786495 systemd[1]: Hostname set to . Jul 14 23:03:35.786503 systemd[1]: Initializing machine ID from random generator. Jul 14 23:03:35.786509 systemd[1]: Queued start job for default target initrd.target. Jul 14 23:03:35.786516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 23:03:35.786523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 14 23:03:35.786529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 23:03:35.786536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 23:03:35.786542 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 23:03:35.786548 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 23:03:35.786557 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 23:03:35.786563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 23:03:35.786570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:03:35.786577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:03:35.786583 systemd[1]: Reached target paths.target - Path Units. Jul 14 23:03:35.786589 systemd[1]: Reached target slices.target - Slice Units. Jul 14 23:03:35.786596 systemd[1]: Reached target swap.target - Swaps. Jul 14 23:03:35.786603 systemd[1]: Reached target timers.target - Timer Units. Jul 14 23:03:35.786610 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:03:35.786616 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:03:35.786622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 23:03:35.786629 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 23:03:35.786635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:03:35.786642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 23:03:35.786648 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 23:03:35.786654 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:03:35.786662 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 23:03:35.786669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 23:03:35.786675 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 23:03:35.786682 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 23:03:35.786688 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 23:03:35.786694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 23:03:35.786700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:35.786721 systemd-journald[216]: Collecting audit messages is disabled. Jul 14 23:03:35.786739 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 23:03:35.786745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:03:35.786751 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 23:03:35.786759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 23:03:35.786766 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 23:03:35.786773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:35.786779 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:35.786786 kernel: Bridge firewalling registered Jul 14 23:03:35.786792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 23:03:35.786800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 23:03:35.786806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 14 23:03:35.786814 systemd-journald[216]: Journal started Jul 14 23:03:35.786832 systemd-journald[216]: Runtime Journal (/run/log/journal/b29ca438d42e4dc3bfd367a8d8a2df63) is 4.8M, max 38.6M, 33.8M free. Jul 14 23:03:35.743335 systemd-modules-load[217]: Inserted module 'overlay' Jul 14 23:03:35.788583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 23:03:35.770203 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 14 23:03:35.790361 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 23:03:35.795675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:03:35.796154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:03:35.799433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 23:03:35.800439 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 23:03:35.800647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:03:35.806726 dracut-cmdline[244]: dracut-dracut-053 Jul 14 23:03:35.808110 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 23:03:35.808573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:03:35.812437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 23:03:35.828290 systemd-resolved[262]: Positive Trust Anchors: Jul 14 23:03:35.828302 systemd-resolved[262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:03:35.828324 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:03:35.830274 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 14 23:03:35.831100 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 23:03:35.831237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:03:35.853361 kernel: SCSI subsystem initialized Jul 14 23:03:35.860356 kernel: Loading iSCSI transport class v2.0-870. Jul 14 23:03:35.867360 kernel: iscsi: registered transport (tcp) Jul 14 23:03:35.882367 kernel: iscsi: registered transport (qla4xxx) Jul 14 23:03:35.882407 kernel: QLogic iSCSI HBA Driver Jul 14 23:03:35.902375 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 23:03:35.907483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 23:03:35.922554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 14 23:03:35.922591 kernel: device-mapper: uevent: version 1.0.3 Jul 14 23:03:35.923633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 23:03:35.955363 kernel: raid6: avx2x4 gen() 52623 MB/s Jul 14 23:03:35.972360 kernel: raid6: avx2x2 gen() 53446 MB/s Jul 14 23:03:35.989650 kernel: raid6: avx2x1 gen() 44775 MB/s Jul 14 23:03:35.989688 kernel: raid6: using algorithm avx2x2 gen() 53446 MB/s Jul 14 23:03:36.007635 kernel: raid6: .... xor() 31302 MB/s, rmw enabled Jul 14 23:03:36.007686 kernel: raid6: using avx2x2 recovery algorithm Jul 14 23:03:36.021354 kernel: xor: automatically using best checksumming function avx Jul 14 23:03:36.122372 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 23:03:36.127469 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:03:36.132444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:03:36.139520 systemd-udevd[434]: Using default interface naming scheme 'v255'. Jul 14 23:03:36.142023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:03:36.152441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 23:03:36.159109 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Jul 14 23:03:36.173724 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:03:36.175481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 23:03:36.247922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:03:36.257421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 23:03:36.266808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 23:03:36.267147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 14 23:03:36.267515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:03:36.267826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 23:03:36.271440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 23:03:36.280964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:03:36.320358 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 23:03:36.336356 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 14 23:03:36.336391 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 23:03:36.351702 kernel: vmw_pvscsi: using 64bit dma Jul 14 23:03:36.351742 kernel: vmw_pvscsi: max_id: 16 Jul 14 23:03:36.351751 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 23:03:36.351760 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 23:03:36.353884 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 23:03:36.355359 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 23:03:36.355577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:03:36.358657 kernel: AES CTR mode by8 optimization enabled Jul 14 23:03:36.358674 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 23:03:36.358683 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 23:03:36.358690 kernel: vmw_pvscsi: using MSI-X Jul 14 23:03:36.358698 kernel: libata version 3.00 loaded. Jul 14 23:03:36.358705 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 23:03:36.355650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:03:36.358645 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:36.358744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 14 23:03:36.358829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:36.359006 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:36.369847 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 23:03:36.369950 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 23:03:36.370039 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 23:03:36.370112 kernel: scsi host1: ata_piix Jul 14 23:03:36.370181 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 23:03:36.371196 kernel: scsi host2: ata_piix Jul 14 23:03:36.371277 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 14 23:03:36.371287 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 14 23:03:36.365606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:36.386073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:36.391454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:36.402208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 23:03:36.536364 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 23:03:36.541386 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 23:03:36.555382 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 23:03:36.555473 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 23:03:36.556613 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 23:03:36.556688 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 23:03:36.556752 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 23:03:36.560362 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 23:03:36.560475 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 23:03:36.562373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.562391 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 23:03:36.574408 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 23:03:36.603318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 14 23:03:36.603949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486) Jul 14 23:03:36.607374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 14 23:03:36.610362 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (488) Jul 14 23:03:36.612088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 23:03:36.616115 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 14 23:03:36.616252 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 14 23:03:36.625468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 14 23:03:36.653385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.659359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.664359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:37.664401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:37.665570 disk-uuid[589]: The operation has completed successfully. Jul 14 23:03:37.706658 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 23:03:37.706716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 23:03:37.711428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 23:03:37.713808 sh[609]: Success Jul 14 23:03:37.722382 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 14 23:03:37.778471 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 23:03:37.795375 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 23:03:37.795634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 23:03:37.821373 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 23:03:37.821412 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:37.823925 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 23:03:37.823943 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 23:03:37.825834 kernel: BTRFS info (device dm-0): using free space tree Jul 14 23:03:37.842367 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 14 23:03:37.845381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 23:03:37.860567 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 14 23:03:37.861970 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 14 23:03:37.883753 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:37.883795 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:37.883805 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:37.888362 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:37.903919 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 23:03:37.904356 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:37.908659 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 23:03:37.913459 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 23:03:37.935176 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 23:03:37.941495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 14 23:03:38.003634 ignition[668]: Ignition 2.19.0 Jul 14 23:03:38.004187 ignition[668]: Stage: fetch-offline Jul 14 23:03:38.004222 ignition[668]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.004231 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.004312 ignition[668]: parsed url from cmdline: "" Jul 14 23:03:38.004315 ignition[668]: no config URL provided Jul 14 23:03:38.004320 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 23:03:38.004327 ignition[668]: no config at "/usr/lib/ignition/user.ign" Jul 14 23:03:38.005259 ignition[668]: config successfully fetched Jul 14 23:03:38.005294 ignition[668]: parsing config with SHA512: c831134dcff4a48d0cb37b47062598fc42c1ba8d40d5754708a426b74bdd5d3b8009941c70738f1e0638725082897fec22bc766038005bb11117fab29b76cfe1 Jul 14 23:03:38.011151 unknown[668]: fetched base config from "system" Jul 14 23:03:38.011160 unknown[668]: fetched user config from "vmware" Jul 14 23:03:38.011595 ignition[668]: fetch-offline: fetch-offline passed Jul 14 23:03:38.011652 ignition[668]: Ignition finished successfully Jul 14 23:03:38.012540 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:03:38.027155 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:03:38.039622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 23:03:38.051716 systemd-networkd[803]: lo: Link UP Jul 14 23:03:38.051722 systemd-networkd[803]: lo: Gained carrier Jul 14 23:03:38.052474 systemd-networkd[803]: Enumeration completed Jul 14 23:03:38.052663 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:03:38.052731 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 23:03:38.052851 systemd[1]: Reached target network.target - Network. 
Jul 14 23:03:38.056421 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 23:03:38.056585 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 23:03:38.052961 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 23:03:38.056471 systemd-networkd[803]: ens192: Link UP Jul 14 23:03:38.056474 systemd-networkd[803]: ens192: Gained carrier Jul 14 23:03:38.066539 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 23:03:38.078599 ignition[805]: Ignition 2.19.0 Jul 14 23:03:38.078611 ignition[805]: Stage: kargs Jul 14 23:03:38.078759 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.078769 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.079680 ignition[805]: kargs: kargs passed Jul 14 23:03:38.079721 ignition[805]: Ignition finished successfully Jul 14 23:03:38.080879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 23:03:38.086455 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 23:03:38.094429 ignition[813]: Ignition 2.19.0 Jul 14 23:03:38.094436 ignition[813]: Stage: disks Jul 14 23:03:38.094574 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.094581 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.095119 ignition[813]: disks: disks passed Jul 14 23:03:38.095147 ignition[813]: Ignition finished successfully Jul 14 23:03:38.095872 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 23:03:38.096316 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 23:03:38.096458 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 23:03:38.096645 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 14 23:03:38.096823 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:03:38.097010 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:03:38.100499 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 23:03:38.112036 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 14 23:03:38.112955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 23:03:38.118454 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 23:03:38.184257 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 23:03:38.184420 kernel: EXT4-fs (sda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 23:03:38.184765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 23:03:38.190407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:03:38.191394 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 23:03:38.192197 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 23:03:38.192224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 23:03:38.192239 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:03:38.196071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 23:03:38.197066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 14 23:03:38.199376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829) Jul 14 23:03:38.201438 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.201466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:38.203368 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:38.207363 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:38.209053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 23:03:38.226553 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 23:03:38.230558 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Jul 14 23:03:38.233420 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 23:03:38.235488 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 23:03:38.296526 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 23:03:38.301455 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 23:03:38.304056 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 23:03:38.308353 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.324555 ignition[943]: INFO : Ignition 2.19.0 Jul 14 23:03:38.324555 ignition[943]: INFO : Stage: mount Jul 14 23:03:38.325014 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.325014 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.325236 ignition[943]: INFO : mount: mount passed Jul 14 23:03:38.325372 ignition[943]: INFO : Ignition finished successfully Jul 14 23:03:38.325914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 23:03:38.328758 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 14 23:03:38.328989 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 23:03:38.817962 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 23:03:38.828515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:03:38.914364 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Jul 14 23:03:38.916991 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.917010 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:38.917018 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:38.920358 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:38.921573 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 23:03:38.937059 ignition[972]: INFO : Ignition 2.19.0 Jul 14 23:03:38.937059 ignition[972]: INFO : Stage: files Jul 14 23:03:38.937588 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.937588 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.937810 ignition[972]: DEBUG : files: compiled without relabeling support, skipping Jul 14 23:03:38.938521 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 23:03:38.938521 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 23:03:38.941652 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 23:03:38.941844 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 23:03:38.942209 unknown[972]: wrote ssh authorized keys file for user: core Jul 14 23:03:38.942475 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 23:03:38.945065 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 23:03:38.945065 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 23:03:38.983679 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 23:03:39.128630 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 23:03:39.128630 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 23:03:39.564325 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 23:03:39.777549 systemd-networkd[803]: ens192: Gained IPv6LL Jul 14 23:03:40.023385 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:40.023385 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 23:03:40.023938 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 23:03:40.023938 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 14 23:03:40.029412 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 
14 23:03:40.029653 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:03:40.030404 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 14 23:03:40.030404 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 23:03:40.542355 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:03:40.545269 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:03:40.545269 ignition[972]: INFO : files: files passed Jul 14 23:03:40.545269 ignition[972]: INFO : Ignition finished successfully Jul 14 23:03:40.547086 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 14 23:03:40.550427 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 23:03:40.552039 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 23:03:40.552623 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 23:03:40.552815 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 23:03:40.558069 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.558069 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.559093 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.559963 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:03:40.560532 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 23:03:40.564429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 23:03:40.577037 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 23:03:40.577096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 23:03:40.577519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 23:03:40.577640 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 23:03:40.577838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 23:03:40.578284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 23:03:40.587534 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:03:40.591421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 14 23:03:40.596869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:03:40.597042 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:03:40.597266 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 23:03:40.597453 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 23:03:40.597523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:03:40.597770 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 23:03:40.598013 systemd[1]: Stopped target basic.target - Basic System. Jul 14 23:03:40.598196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 23:03:40.598393 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:03:40.598588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 23:03:40.598790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 23:03:40.599138 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 23:03:40.599329 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 23:03:40.599547 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 23:03:40.599730 systemd[1]: Stopped target swap.target - Swaps. Jul 14 23:03:40.599886 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 23:03:40.599947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:03:40.600197 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:03:40.600435 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:03:40.600618 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 23:03:40.600664 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 14 23:03:40.600830 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 23:03:40.600890 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 23:03:40.601177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 23:03:40.601241 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:03:40.601492 systemd[1]: Stopped target paths.target - Path Units. Jul 14 23:03:40.601630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 23:03:40.607367 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 23:03:40.607534 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 23:03:40.607731 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 23:03:40.607916 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 23:03:40.607981 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:03:40.608198 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 23:03:40.608245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:03:40.608524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 23:03:40.608609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:03:40.608838 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 23:03:40.608916 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 23:03:40.616501 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 23:03:40.616607 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 23:03:40.616698 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:03:40.619479 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 14 23:03:40.619593 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 23:03:40.619688 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:03:40.619866 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 23:03:40.619937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:03:40.621672 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 23:03:40.621734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 23:03:40.626780 ignition[1027]: INFO : Ignition 2.19.0 Jul 14 23:03:40.627280 ignition[1027]: INFO : Stage: umount Jul 14 23:03:40.627280 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:40.627280 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:40.628206 ignition[1027]: INFO : umount: umount passed Jul 14 23:03:40.628381 ignition[1027]: INFO : Ignition finished successfully Jul 14 23:03:40.628998 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 23:03:40.629175 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 23:03:40.629721 systemd[1]: Stopped target network.target - Network. Jul 14 23:03:40.630048 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 23:03:40.630079 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 23:03:40.630298 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 23:03:40.630321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 23:03:40.630810 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 23:03:40.630834 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 23:03:40.630944 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 23:03:40.630965 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 14 23:03:40.631155 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 23:03:40.631571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 23:03:40.633640 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 23:03:40.633841 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 23:03:40.634855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 23:03:40.635005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:03:40.637173 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 23:03:40.637232 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 23:03:40.637583 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 23:03:40.637610 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:03:40.641439 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 23:03:40.641536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 23:03:40.641563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:03:40.641686 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 14 23:03:40.641708 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 23:03:40.641827 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 23:03:40.641847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:03:40.641955 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 23:03:40.641975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 23:03:40.642123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 14 23:03:40.647643 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 23:03:40.647705 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 23:03:40.651681 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 23:03:40.651763 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:03:40.652227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 23:03:40.652254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 23:03:40.654252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 23:03:40.654275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 23:03:40.654387 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 23:03:40.654412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:03:40.654559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 23:03:40.654581 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 23:03:40.654706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:03:40.654728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:03:40.659446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 23:03:40.659558 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 23:03:40.659592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:03:40.659727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 23:03:40.659750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:40.660447 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 23:03:40.662714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 14 23:03:40.662770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 23:03:40.888836 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 23:03:40.888902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 23:03:40.889184 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 23:03:40.889294 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 23:03:40.889320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 23:03:40.893427 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 23:03:40.897043 systemd[1]: Switching root. Jul 14 23:03:40.924809 systemd-journald[216]: Journal stopped Jul 14 23:03:35.730071 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025 Jul 14 23:03:35.730086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 23:03:35.730093 kernel: Disabled fast string operations Jul 14 23:03:35.730097 kernel: BIOS-provided physical RAM map: Jul 14 23:03:35.730101 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 14 23:03:35.730105 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 14 23:03:35.730111 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 14 23:03:35.730115 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 14 23:03:35.730119 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] 
ACPI data Jul 14 23:03:35.730123 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 14 23:03:35.730127 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 14 23:03:35.730131 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 14 23:03:35.730135 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 14 23:03:35.730140 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 14 23:03:35.730146 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 14 23:03:35.730150 kernel: NX (Execute Disable) protection: active Jul 14 23:03:35.730155 kernel: APIC: Static calls initialized Jul 14 23:03:35.730160 kernel: SMBIOS 2.7 present. Jul 14 23:03:35.730164 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 14 23:03:35.730169 kernel: vmware: hypercall mode: 0x00 Jul 14 23:03:35.730173 kernel: Hypervisor detected: VMware Jul 14 23:03:35.730178 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 14 23:03:35.730184 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 14 23:03:35.730188 kernel: vmware: using clock offset of 2974975035 ns Jul 14 23:03:35.730193 kernel: tsc: Detected 3408.000 MHz processor Jul 14 23:03:35.730198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 23:03:35.730203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 23:03:35.730208 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 14 23:03:35.730213 kernel: total RAM covered: 3072M Jul 14 23:03:35.730217 kernel: Found optimal setting for mtrr clean up Jul 14 23:03:35.730223 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 14 23:03:35.730228 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 14 23:03:35.730233 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 
14 23:03:35.730238 kernel: Using GB pages for direct mapping Jul 14 23:03:35.730243 kernel: ACPI: Early table checksum verification disabled Jul 14 23:03:35.730247 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 14 23:03:35.730252 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 14 23:03:35.730257 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 14 23:03:35.730262 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 14 23:03:35.730266 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 14 23:03:35.730274 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 14 23:03:35.730279 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 14 23:03:35.730284 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 14 23:03:35.730289 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 14 23:03:35.730294 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 14 23:03:35.730300 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 14 23:03:35.730305 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 14 23:03:35.730310 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 14 23:03:35.730315 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 14 23:03:35.730320 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 23:03:35.730325 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 23:03:35.730330 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 14 23:03:35.730335 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 14 23:03:35.730340 kernel: ACPI: Reserving MCFG table memory at [mem 
0x7feea5af-0x7feea5ea] Jul 14 23:03:35.730354 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 14 23:03:35.730361 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 14 23:03:35.730366 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 14 23:03:35.730371 kernel: system APIC only can use physical flat Jul 14 23:03:35.730376 kernel: APIC: Switched APIC routing to: physical flat Jul 14 23:03:35.730381 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 14 23:03:35.730386 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 14 23:03:35.730391 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 14 23:03:35.730396 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 14 23:03:35.730401 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 14 23:03:35.730406 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 14 23:03:35.730412 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 14 23:03:35.730417 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 14 23:03:35.730422 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 14 23:03:35.730427 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 14 23:03:35.730432 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 14 23:03:35.730437 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 14 23:03:35.730441 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 14 23:03:35.730446 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 14 23:03:35.730451 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 14 23:03:35.730456 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 14 23:03:35.730462 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 14 23:03:35.730467 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 14 23:03:35.730472 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 14 23:03:35.730476 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 14 23:03:35.730481 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 14 23:03:35.730486 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 14 23:03:35.730491 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 14 23:03:35.730496 
kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 14 23:03:35.730501 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 14 23:03:35.730506 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 14 23:03:35.730512 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 14 23:03:35.730517 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 14 23:03:35.730522 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 14 23:03:35.730526 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 14 23:03:35.730532 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 14 23:03:35.730536 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 14 23:03:35.730541 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 14 23:03:35.730546 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 14 23:03:35.730551 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 14 23:03:35.730556 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 14 23:03:35.730562 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 14 23:03:35.730567 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 14 23:03:35.730571 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 14 23:03:35.730576 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 14 23:03:35.730581 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 14 23:03:35.730586 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 14 23:03:35.730591 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 14 23:03:35.730595 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 14 23:03:35.730600 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 14 23:03:35.730605 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 14 23:03:35.730611 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 14 23:03:35.730616 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 14 23:03:35.730621 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 14 23:03:35.730625 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 14 23:03:35.730630 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 14 23:03:35.730635 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 14 23:03:35.730640 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 14 23:03:35.730645 kernel: SRAT: PXM 0 
-> APIC 0x6a -> Node 0
Jul 14 23:03:35.730650 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 14 23:03:35.730655 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 14 23:03:35.730659 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 14 23:03:35.730666 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 14 23:03:35.730671 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 14 23:03:35.730679 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 14 23:03:35.730685 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 14 23:03:35.730691 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 14 23:03:35.730696 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 14 23:03:35.730701 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 14 23:03:35.730707 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 14 23:03:35.730713 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 14 23:03:35.730718 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 14 23:03:35.730723 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 14 23:03:35.730729 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 14 23:03:35.730734 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 14 23:03:35.730739 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 14 23:03:35.730745 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 14 23:03:35.730750 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 14 23:03:35.730755 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 14 23:03:35.730760 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 14 23:03:35.730766 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 14 23:03:35.730772 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 14 23:03:35.730777 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 14 23:03:35.730782 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 14 23:03:35.730788 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 14 23:03:35.730793 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 14 23:03:35.730798 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 14 23:03:35.730803 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 14 23:03:35.730808 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 14 23:03:35.730813 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 14 23:03:35.730819 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 14 23:03:35.730826 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 14 23:03:35.730831 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 14 23:03:35.730836 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 14 23:03:35.730841 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 14 23:03:35.730847 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 14 23:03:35.730852 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 14 23:03:35.730857 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 14 23:03:35.730862 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 14 23:03:35.730867 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 14 23:03:35.730873 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 14 23:03:35.730879 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 14 23:03:35.730884 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 14 23:03:35.730889 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 14 23:03:35.730895 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 14 23:03:35.730900 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 14 23:03:35.730905 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 14 23:03:35.730911 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 14 23:03:35.730916 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 14 23:03:35.730921 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 14 23:03:35.730926 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 14 23:03:35.730931 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 14 23:03:35.730938 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 14 23:03:35.730943 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 14 23:03:35.730948 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 14 23:03:35.730953 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 14 23:03:35.730959 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 14 23:03:35.730964 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 14 23:03:35.730969 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 14 23:03:35.730975 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 14 23:03:35.730980 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 14 23:03:35.730985 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 14 23:03:35.730991 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 14 23:03:35.730997 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 14 23:03:35.731002 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 14 23:03:35.731007 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 14 23:03:35.731013 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 14 23:03:35.731018 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 14 23:03:35.731023 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 14 23:03:35.731028 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 14 23:03:35.731033 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 14 23:03:35.731039 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 14 23:03:35.731045 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 14 23:03:35.731050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 14 23:03:35.731056 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 14 23:03:35.731061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 14 23:03:35.731067 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 14 23:03:35.731072 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 14 23:03:35.731078 kernel: Zone ranges:
Jul 14 23:03:35.731083 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 23:03:35.731088 kernel:   DMA32    [mem 0x0000000001000000-0x000000007fffffff]
Jul 14 23:03:35.731095 kernel:   Normal   empty
Jul 14 23:03:35.731100 kernel: Movable zone start for each node
Jul 14 23:03:35.731106 kernel: Early memory node ranges
Jul 14 23:03:35.731111 kernel:   node   0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 14 23:03:35.731117 kernel:   node   0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 14 23:03:35.731122 kernel:   node   0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 14 23:03:35.731127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 14 23:03:35.731133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 23:03:35.731138 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 14 23:03:35.731144 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 14 23:03:35.731150 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 14 23:03:35.731155 kernel: system APIC only can use physical flat
Jul 14 23:03:35.731161 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 14 23:03:35.731166 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 14 23:03:35.731171 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 14 23:03:35.731177 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 14 23:03:35.731182 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 14 23:03:35.731187 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 14 23:03:35.731192 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 14 23:03:35.731198 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 14 23:03:35.731204 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 14 23:03:35.731209 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 14 23:03:35.731215 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 14 23:03:35.731220 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 14 23:03:35.731225 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 14 23:03:35.731231 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 14 23:03:35.731236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 14 23:03:35.731241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 14 23:03:35.731246 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 14 23:03:35.731253 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 14 23:03:35.731258 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 14 23:03:35.731263 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 14 23:03:35.731268 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 14 23:03:35.731274 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 14 23:03:35.731279 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 14 23:03:35.731284 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 14 23:03:35.731290 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 14 23:03:35.731295 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 14 23:03:35.731300 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 14 23:03:35.731306 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 14 23:03:35.731312 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 14 23:03:35.731317 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 14 23:03:35.731322 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 14 23:03:35.731328 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 14 23:03:35.731333 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 14 23:03:35.731338 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 14 23:03:35.731351 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 14 23:03:35.731357 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 14 23:03:35.731362 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 14 23:03:35.731369 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 14 23:03:35.731374 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 14 23:03:35.731379 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 14 23:03:35.731384 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 14 23:03:35.731390 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 14 23:03:35.731395 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 14 23:03:35.731400 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 14 23:03:35.731406 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 14 23:03:35.731433 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 14 23:03:35.731439 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 14 23:03:35.731445 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 14 23:03:35.731450 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 14 23:03:35.731456 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 14 23:03:35.731478 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 14 23:03:35.731483 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 14 23:03:35.731488 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 14 23:03:35.731493 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 14 23:03:35.731499 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 14 23:03:35.731504 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 14 23:03:35.731510 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 14 23:03:35.731515 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 14 23:03:35.731521 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 14 23:03:35.731526 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 14 23:03:35.731531 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 14 23:03:35.731536 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 14 23:03:35.731542 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 14 23:03:35.731547 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 14 23:03:35.731553 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 14 23:03:35.731558 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 14 23:03:35.731564 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 14 23:03:35.731570 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 14 23:03:35.731575 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 14 23:03:35.731580 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 14 23:03:35.731586 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 14 23:03:35.731591 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 14 23:03:35.731596 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 14 23:03:35.731601 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 14 23:03:35.731607 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 14 23:03:35.731613 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 14 23:03:35.731618 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 14 23:03:35.731624 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 14 23:03:35.731629 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 14 23:03:35.731634 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 14 23:03:35.731639 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 14 23:03:35.731645 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 14 23:03:35.731650 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 14 23:03:35.731655 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 14 23:03:35.731660 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 14 23:03:35.731667 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 14 23:03:35.731672 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 14 23:03:35.731677 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 14 23:03:35.731683 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 14 23:03:35.731688 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 14 23:03:35.731693 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 14 23:03:35.731698 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 14 23:03:35.731704 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 14 23:03:35.731709 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 14 23:03:35.731714 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 14 23:03:35.731721 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 14 23:03:35.731726 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 14 23:03:35.731731 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 14 23:03:35.731737 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 14 23:03:35.731742 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 14 23:03:35.731747 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 14 23:03:35.731752 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 14 23:03:35.731758 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 14 23:03:35.731763 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 14 23:03:35.731769 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 14 23:03:35.731775 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 14 23:03:35.731780 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 14 23:03:35.731786 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 14 23:03:35.731791 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 14 23:03:35.731796 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 14 23:03:35.731802 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 14 23:03:35.731807 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 14 23:03:35.731812 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 14 23:03:35.731817 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 14 23:03:35.731824 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 14 23:03:35.731829 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 14 23:03:35.731834 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 14 23:03:35.731840 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 14 23:03:35.731845 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 14 23:03:35.731851 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 14 23:03:35.731856 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 14 23:03:35.731861 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 14 23:03:35.731866 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 14 23:03:35.731872 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 14 23:03:35.731878 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 14 23:03:35.731883 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 14 23:03:35.731888 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 14 23:03:35.731894 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 14 23:03:35.731899 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 14 23:03:35.731905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 14 23:03:35.731910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 23:03:35.731916 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 14 23:03:35.731921 kernel: TSC deadline timer available
Jul 14 23:03:35.731926 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 14 23:03:35.731933 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 14 23:03:35.731938 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 14 23:03:35.731944 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 23:03:35.731949 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 14 23:03:35.731955 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 14 23:03:35.731960 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 14 23:03:35.731965 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 14 23:03:35.731971 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 14 23:03:35.731977 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 14 23:03:35.731982 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 14 23:03:35.731988 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 14 23:03:35.732000 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 14 23:03:35.732006 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 14 23:03:35.732012 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 14 23:03:35.732017 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 14 23:03:35.732023 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 14 23:03:35.732029 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 14 23:03:35.732036 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 14 23:03:35.732041 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 14 23:03:35.732047 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 14 23:03:35.732052 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 14 23:03:35.732058 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 14 23:03:35.732064 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 23:03:35.732071 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 23:03:35.732076 kernel: random: crng init done
Jul 14 23:03:35.732083 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 14 23:03:35.732089 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 14 23:03:35.732094 kernel: printk: log_buf_len min size: 262144 bytes
Jul 14 23:03:35.732100 kernel: printk: log_buf_len: 1048576 bytes
Jul 14 23:03:35.732106 kernel: printk: early log buf free: 239648(91%)
Jul 14 23:03:35.732112 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 23:03:35.732118 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 14 23:03:35.732123 kernel: Fallback order for Node 0: 0
Jul 14 23:03:35.732129 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515808
Jul 14 23:03:35.732136 kernel: Policy zone: DMA32
Jul 14 23:03:35.732142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 23:03:35.732148 kernel: Memory: 1936340K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160028K reserved, 0K cma-reserved)
Jul 14 23:03:35.732155 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 14 23:03:35.732160 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 23:03:35.732166 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 23:03:35.732173 kernel: Dynamic Preempt: voluntary
Jul 14 23:03:35.732179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 23:03:35.732185 kernel: rcu: RCU event tracing is enabled.
Jul 14 23:03:35.732190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 14 23:03:35.732196 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 23:03:35.732202 kernel: Rude variant of Tasks RCU enabled.
Jul 14 23:03:35.732208 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 23:03:35.732213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 23:03:35.732219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 14 23:03:35.732226 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 14 23:03:35.732232 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 14 23:03:35.732237 kernel: Console: colour VGA+ 80x25
Jul 14 23:03:35.732243 kernel: printk: console [tty0] enabled
Jul 14 23:03:35.732249 kernel: printk: console [ttyS0] enabled
Jul 14 23:03:35.732254 kernel: ACPI: Core revision 20230628
Jul 14 23:03:35.732260 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 14 23:03:35.732266 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 23:03:35.732272 kernel: x2apic enabled
Jul 14 23:03:35.732279 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 23:03:35.732284 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 23:03:35.732290 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 14 23:03:35.732296 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 14 23:03:35.732302 kernel: Disabled fast string operations
Jul 14 23:03:35.732307 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 14 23:03:35.732313 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 14 23:03:35.732319 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 23:03:35.732325 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 14 23:03:35.732332 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 14 23:03:35.732337 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 14 23:03:35.732492 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 14 23:03:35.732500 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 14 23:03:35.732506 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 23:03:35.732511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 23:03:35.732517 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 14 23:03:35.732523 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 14 23:03:35.732529 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 14 23:03:35.732537 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 14 23:03:35.732543 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 23:03:35.732548 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 23:03:35.732554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 23:03:35.732560 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jul 14 23:03:35.732566 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 23:03:35.732572 kernel: Freeing SMP alternatives memory: 32K
Jul 14 23:03:35.732598 kernel: pid_max: default: 131072 minimum: 1024
Jul 14 23:03:35.732604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 23:03:35.732610 kernel: landlock: Up and running.
Jul 14 23:03:35.732616 kernel: SELinux:  Initializing.
Jul 14 23:03:35.732639 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 23:03:35.732646 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 23:03:35.732651 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 14 23:03:35.732657 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732663 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732669 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 23:03:35.732674 kernel: Performance Events: Skylake events, core PMU driver.
Jul 14 23:03:35.732681 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 14 23:03:35.732687 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 14 23:03:35.732693 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 14 23:03:35.732698 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 14 23:03:35.732704 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 14 23:03:35.732709 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 14 23:03:35.732715 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 14 23:03:35.732721 kernel: ... version:                1
Jul 14 23:03:35.732728 kernel: ... bit width:              48
Jul 14 23:03:35.732734 kernel: ... generic registers:      4
Jul 14 23:03:35.732739 kernel: ... value mask:             0000ffffffffffff
Jul 14 23:03:35.732745 kernel: ... max period:             000000007fffffff
Jul 14 23:03:35.732751 kernel: ... fixed-purpose events:   0
Jul 14 23:03:35.732756 kernel: ... event mask:             000000000000000f
Jul 14 23:03:35.732762 kernel: signal: max sigframe size: 1776
Jul 14 23:03:35.732768 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 23:03:35.732774 kernel: rcu: 	Max phase no-delay instances is 400.
Jul 14 23:03:35.732781 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 14 23:03:35.732786 kernel: smp: Bringing up secondary CPUs ...
Jul 14 23:03:35.732792 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 23:03:35.732798 kernel: .... node  #0, CPUs:          #1
Jul 14 23:03:35.732803 kernel: Disabled fast string operations
Jul 14 23:03:35.732809 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Jul 14 23:03:35.732815 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 14 23:03:35.732820 kernel: smp: Brought up 1 node, 2 CPUs
Jul 14 23:03:35.732826 kernel: smpboot: Max logical packages: 128
Jul 14 23:03:35.732832 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jul 14 23:03:35.732838 kernel: devtmpfs: initialized
Jul 14 23:03:35.732844 kernel: x86/mm: Memory block size: 128MB
Jul 14 23:03:35.732850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jul 14 23:03:35.732856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 23:03:35.732862 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 14 23:03:35.732867 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 23:03:35.732873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 23:03:35.732879 kernel: audit: initializing netlink subsys (disabled)
Jul 14 23:03:35.732885 kernel: audit: type=2000 audit(1752534214.086:1): state=initialized audit_enabled=0 res=1
Jul 14 23:03:35.732892 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 23:03:35.732897 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 23:03:35.732903 kernel: cpuidle: using governor menu
Jul 14 23:03:35.732908 kernel: Simple Boot Flag at 0x36 set to 0x80
Jul 14 23:03:35.732914 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 23:03:35.732920 kernel: dca service started, version 1.12.1
Jul 14 23:03:35.732926 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jul 14 23:03:35.732931 kernel: PCI: Using configuration type 1 for base access
Jul 14 23:03:35.732937 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 23:03:35.732944 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 23:03:35.732950 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 23:03:35.732955 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 23:03:35.732961 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 23:03:35.732967 kernel: ACPI: Added _OSI(Module Device)
Jul 14 23:03:35.732973 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 23:03:35.732979 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 23:03:35.732984 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 23:03:35.732990 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 14 23:03:35.732996 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 23:03:35.733002 kernel: ACPI: Interpreter enabled
Jul 14 23:03:35.733009 kernel: ACPI: PM: (supports S0 S1 S5)
Jul 14 23:03:35.733015 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 23:03:35.733020 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 23:03:35.733026 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 23:03:35.733032 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jul 14 23:03:35.733038 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jul 14 23:03:35.733119 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 23:03:35.733180 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jul 14 23:03:35.733232 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jul 14 23:03:35.733241 kernel: PCI host bridge to bus 0000:00
Jul 14 23:03:35.733294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 23:03:35.733385 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jul 14 23:03:35.733659 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 23:03:35.733733 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jul 14 23:03:35.733781 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xfeff window]
Jul 14 23:03:35.733844 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jul 14 23:03:35.733905 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Jul 14 23:03:35.733963 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Jul 14 23:03:35.734020 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Jul 14 23:03:35.734078 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Jul 14 23:03:35.734131 kernel: pci 0000:00:07.1: reg 0x20: [io  0x1060-0x106f]
Jul 14 23:03:35.734183 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Jul 14 23:03:35.734234 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Jul 14 23:03:35.734285 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Jul 14 23:03:35.734336 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Jul 14 23:03:35.734437 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Jul 14 23:03:35.736450 kernel: pci 0000:00:07.3: quirk: [io  0x1000-0x103f] claimed by PIIX4 ACPI
Jul 14 23:03:35.736514 kernel: pci 0000:00:07.3: quirk: [io  0x1040-0x104f] claimed by PIIX4 SMB
Jul 14 23:03:35.736575 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Jul 14 23:03:35.736630 kernel: pci 0000:00:07.7: reg 0x10: [io  0x1080-0x10bf]
Jul 14 23:03:35.736684 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Jul 14 23:03:35.736741 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Jul 14 23:03:35.736798 kernel: pci 0000:00:0f.0: reg 0x10: [io  0x1070-0x107f]
Jul 14 23:03:35.736851 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Jul 14 23:03:35.736902 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Jul 14 23:03:35.736954 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jul 14 23:03:35.737005 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 23:03:35.737061 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jul 14 23:03:35.737118 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737174 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737230 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737284 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737340 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737402 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737464 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737520 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737580 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737633 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737689 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737741 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737797 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737853 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.737910 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.737963 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738019 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738073 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738129 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738185 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738242 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738295 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738443 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738501 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738559 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738616 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738675 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738728 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738785 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738838 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.738896 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.738952 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739009 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739061 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739118 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739172 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739227 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739283 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739339 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739408 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739466 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739519 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739578 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739631 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739691 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739745 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739801 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739855 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.739912 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.739965 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.740023 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.740077 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.740134 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.740187 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.740244 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.740296 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.743444 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.743520 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.743585 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.743675 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.743749 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.743801 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.743861 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Jul 14 23:03:35.743913 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jul 14 23:03:35.744001 kernel: pci_bus 0000:01: extended config space not accessible
Jul 14 23:03:35.744055 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 14 23:03:35.744108 kernel: pci_bus 0000:02: extended config space not accessible
Jul 14 23:03:35.744117 kernel: acpiphp: Slot [32] registered
Jul 14 23:03:35.744123 kernel: acpiphp: Slot [33] registered
Jul 14 23:03:35.744132 kernel: acpiphp: Slot [34] registered
Jul 14 23:03:35.744137 kernel: acpiphp: Slot [35] registered
Jul 14 23:03:35.744143 kernel: acpiphp: Slot [36] registered
Jul 14 23:03:35.744149 kernel: acpiphp: Slot [37] registered
Jul 14 23:03:35.744155 kernel: acpiphp: Slot [38] registered
Jul 14 23:03:35.744161 kernel: acpiphp: Slot [39] registered
Jul 14 23:03:35.744167 kernel: acpiphp: Slot [40] registered
Jul 14 23:03:35.744191 kernel: acpiphp: Slot [41] registered
Jul 14 23:03:35.744197 kernel: acpiphp: Slot [42] registered
Jul 14 23:03:35.744203 kernel: acpiphp: Slot [43] registered
Jul 14 23:03:35.744210 kernel: acpiphp: Slot [44] registered
Jul 14 23:03:35.744216 kernel: acpiphp: Slot [45] registered
Jul 14 23:03:35.744222 kernel: acpiphp: Slot [46] registered
Jul 14 23:03:35.744244 kernel: acpiphp: Slot [47] registered
Jul 14 23:03:35.744250 kernel: acpiphp: Slot [48] registered
Jul 14 23:03:35.744256 kernel: acpiphp: Slot [49] registered
Jul 14 23:03:35.744262 kernel: acpiphp: Slot [50] registered
Jul 14 23:03:35.744272 kernel: acpiphp: Slot [51] registered
Jul 14 23:03:35.744278 kernel: acpiphp: Slot [52] registered
Jul 14 23:03:35.744294 kernel: acpiphp: Slot [53] registered
Jul 14 23:03:35.744302 kernel: acpiphp: Slot [54] registered
Jul 14
23:03:35.744311 kernel: acpiphp: Slot [55] registered Jul 14 23:03:35.744320 kernel: acpiphp: Slot [56] registered Jul 14 23:03:35.744326 kernel: acpiphp: Slot [57] registered Jul 14 23:03:35.744331 kernel: acpiphp: Slot [58] registered Jul 14 23:03:35.744337 kernel: acpiphp: Slot [59] registered Jul 14 23:03:35.744351 kernel: acpiphp: Slot [60] registered Jul 14 23:03:35.744357 kernel: acpiphp: Slot [61] registered Jul 14 23:03:35.744365 kernel: acpiphp: Slot [62] registered Jul 14 23:03:35.744373 kernel: acpiphp: Slot [63] registered Jul 14 23:03:35.744435 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 23:03:35.744488 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:03:35.744539 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.744590 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.744641 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 23:03:35.744692 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 23:03:35.744745 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 23:03:35.744797 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 23:03:35.744868 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 23:03:35.747657 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 14 23:03:35.747722 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 14 23:03:35.747780 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 23:03:35.747835 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:03:35.747890 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 23:03:35.747948 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Jul 14 23:03:35.748002 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:03:35.748055 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:03:35.748108 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.748162 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:03:35.748216 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 23:03:35.748269 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.748325 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:03:35.748423 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:03:35.748494 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:03:35.748545 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.748596 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.748649 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:03:35.748700 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.748751 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.748807 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:03:35.748858 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.748910 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.748963 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:03:35.749026 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.749084 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.749136 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:03:35.749187 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.749238 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.749290 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:03:35.749348 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.749407 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.749505 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 14 23:03:35.749560 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 14 23:03:35.749613 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 14 23:03:35.749666 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 14 23:03:35.749719 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 14 23:03:35.749771 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:03:35.749825 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 23:03:35.749880 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 23:03:35.749933 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 23:03:35.749986 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:03:35.750038 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:03:35.750090 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 23:03:35.750144 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:03:35.750196 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:03:35.750247 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.750303 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.752402 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:03:35.752460 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:03:35.752513 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.752565 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.752620 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:03:35.752672 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.752724 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.752782 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:03:35.752835 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.752887 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.752941 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:03:35.752993 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.753045 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.753099 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:03:35.753151 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.753206 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.753259 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:03:35.753311 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.753373 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.753428 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:03:35.753480 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:03:35.753533 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.753586 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.753644 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:03:35.753697 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:03:35.753750 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.753803 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.753856 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:03:35.753909 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:03:35.753961 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.754016 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.754071 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:03:35.754123 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.754176 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.754231 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:03:35.754285 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.754338 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.754405 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 23:03:35.754466 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.754519 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:03:35.754574 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:03:35.754626 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.754678 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.754732 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:03:35.754785 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.754838 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.754896 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:03:35.754949 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:03:35.755001 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.755054 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.755109 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:03:35.755161 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:03:35.755214 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.755266 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.755323 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:03:35.757541 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.757603 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.757662 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:03:35.757716 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.757769 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.757823 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 
23:03:35.757875 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.757932 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:03:35.757986 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:03:35.758040 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.758092 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.758171 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:03:35.758225 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.758278 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.758332 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:03:35.758400 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.758460 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.758468 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 23:03:35.758475 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 23:03:35.758481 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 23:03:35.758487 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 23:03:35.758493 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 23:03:35.758499 kernel: iommu: Default domain type: Translated Jul 14 23:03:35.758507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 23:03:35.758513 kernel: PCI: Using ACPI for IRQ routing Jul 14 23:03:35.758519 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 23:03:35.758525 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 23:03:35.758531 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 23:03:35.758584 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 23:03:35.758638 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jul 14 23:03:35.758691 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 23:03:35.758700 kernel: vgaarb: loaded Jul 14 23:03:35.758708 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 23:03:35.758714 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 23:03:35.758720 kernel: clocksource: Switched to clocksource tsc-early Jul 14 23:03:35.758726 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 23:03:35.758732 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 23:03:35.758738 kernel: pnp: PnP ACPI init Jul 14 23:03:35.758797 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 23:03:35.758848 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 14 23:03:35.758899 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 23:03:35.758952 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 23:03:35.759003 kernel: pnp 00:06: [dma 2] Jul 14 23:03:35.759056 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 23:03:35.759105 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 23:03:35.759152 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 23:03:35.759161 kernel: pnp: PnP ACPI: found 8 devices Jul 14 23:03:35.759170 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 23:03:35.759176 kernel: NET: Registered PF_INET protocol family Jul 14 23:03:35.759182 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 23:03:35.759188 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 14 23:03:35.759194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 23:03:35.759200 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 23:03:35.759206 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 14 23:03:35.759212 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 23:03:35.759218 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:03:35.759225 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:03:35.759232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 23:03:35.759237 kernel: NET: Registered PF_XDP protocol family Jul 14 23:03:35.759292 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 23:03:35.759376 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 23:03:35.759434 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 23:03:35.759488 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 23:03:35.759545 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 23:03:35.759599 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 23:03:35.759652 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 23:03:35.759706 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 23:03:35.759759 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 23:03:35.759813 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 23:03:35.759869 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 14 23:03:35.759922 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 23:03:35.759976 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 
23:03:35.760029 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 23:03:35.760082 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 23:03:35.760138 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 23:03:35.760192 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 23:03:35.760245 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 23:03:35.760298 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 23:03:35.760362 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 23:03:35.760418 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 23:03:35.760475 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 23:03:35.760528 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 23:03:35.760582 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.760634 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.760688 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760740 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.760793 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760849 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.760902 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.760955 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761008 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761061 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 
14 23:03:35.761115 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761167 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761220 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761276 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761336 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761713 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761769 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761838 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.761891 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.761945 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762014 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762070 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762122 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762175 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762226 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762278 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762329 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762393 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762447 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762501 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762553 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762604 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762656 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762708 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762761 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762812 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762864 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.762919 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.762971 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763023 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763075 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763126 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763178 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763230 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.763282 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.763333 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.765936 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.765993 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766048 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766100 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766153 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766205 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766257 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766309 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766374 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jul 14 23:03:35.766431 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766483 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766535 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766597 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766653 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766704 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766756 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766807 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766859 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.766911 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.766966 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767019 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767070 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767121 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767172 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767223 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767275 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767326 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767418 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767475 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767526 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767579 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767630 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767682 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767734 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767785 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767836 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767888 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.767939 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.767994 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:03:35.768046 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:03:35.768099 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 23:03:35.768151 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 23:03:35.768203 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:03:35.768254 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.768305 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.768368 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 14 23:03:35.768453 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:03:35.768554 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:03:35.768606 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.768658 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.768751 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:03:35.768830 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 23:03:35.768906 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.768962 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 
23:03:35.769016 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:03:35.769072 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:03:35.769123 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.769175 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.769227 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:03:35.769279 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.769332 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.769431 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:03:35.769484 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.769536 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.769592 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:03:35.769647 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.769700 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.769752 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:03:35.769803 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.769854 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.769908 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:03:35.769961 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.770013 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.770068 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 14 23:03:35.770122 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:03:35.770174 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:03:35.770242 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jul 14 23:03:35.770314 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.770382 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:03:35.770441 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:03:35.770510 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.770562 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.770615 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:03:35.770668 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:03:35.770720 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.770772 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.770824 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:03:35.770875 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.770927 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.770981 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:03:35.771033 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.771085 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.771137 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:03:35.771189 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.771241 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.771292 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:03:35.771352 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.771405 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.771465 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:03:35.771517 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.771570 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.771623 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:03:35.771675 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:03:35.771727 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.771779 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.771832 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:03:35.771884 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:03:35.771935 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.771990 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.772044 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:03:35.772096 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:03:35.772148 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.772199 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.772252 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:03:35.772304 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.772424 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.772478 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:03:35.772533 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.772585 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.772636 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 23:03:35.772687 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.772738 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 
23:03:35.772790 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:03:35.772842 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.772894 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.772946 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:03:35.772997 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.773051 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.773103 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:03:35.773156 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:03:35.773207 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.773259 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.773311 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:03:35.773378 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:03:35.773477 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.773530 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.773585 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:03:35.773637 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.773689 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.773741 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:03:35.773794 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.773846 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.773898 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 23:03:35.773951 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.774003 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jul 14 23:03:35.774055 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:03:35.774110 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.774162 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.774214 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:03:35.774267 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.774318 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.774402 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:03:35.774458 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.774510 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.774560 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:03:35.774609 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:03:35.774655 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 23:03:35.774701 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:03:35.774745 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:03:35.774796 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 23:03:35.774844 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 23:03:35.774892 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:03:35.774940 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:03:35.774990 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:03:35.775038 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 23:03:35.775086 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:03:35.775134 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:03:35.775186 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 23:03:35.775235 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 23:03:35.775283 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:03:35.775337 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 23:03:35.775399 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 23:03:35.775447 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:03:35.775499 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 23:03:35.775547 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 14 23:03:35.775594 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:03:35.775646 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 23:03:35.775698 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:03:35.775753 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 23:03:35.775802 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:03:35.775854 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 23:03:35.775903 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:03:35.775955 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 23:03:35.776006 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:03:35.776058 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 23:03:35.776106 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:03:35.776169 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 23:03:35.776218 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 23:03:35.776268 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:03:35.776321 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 23:03:35.776479 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 23:03:35.776529 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:03:35.776582 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 23:03:35.776631 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 23:03:35.776682 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:03:35.776737 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 23:03:35.776786 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:03:35.776839 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 23:03:35.776887 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:03:35.776938 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 23:03:35.776986 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:03:35.777039 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 23:03:35.777088 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:03:35.777138 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 23:03:35.777187 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:03:35.777238 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 23:03:35.777286 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 23:03:35.777334 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:03:35.777396 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 23:03:35.777444 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 23:03:35.777492 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:03:35.777543 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jul 14 23:03:35.777591 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 23:03:35.777639 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:03:35.777690 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 23:03:35.777741 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:03:35.777796 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 23:03:35.777845 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:03:35.777897 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 23:03:35.777949 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:03:35.778000 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 23:03:35.778051 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:03:35.778103 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 23:03:35.778151 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:03:35.778203 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 23:03:35.778252 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 23:03:35.778303 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:03:35.778364 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 23:03:35.778447 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 23:03:35.778497 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:03:35.778551 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 23:03:35.778601 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:03:35.778654 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 23:03:35.778708 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:03:35.778761 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 23:03:35.778812 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:03:35.778870 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 23:03:35.778921 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:03:35.778974 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 23:03:35.779027 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:03:35.779080 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 23:03:35.779130 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:03:35.779187 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 23:03:35.779201 kernel: PCI: CLS 32 bytes, default 64 Jul 14 23:03:35.779208 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 23:03:35.779215 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 23:03:35.779223 kernel: clocksource: Switched to clocksource tsc Jul 14 23:03:35.779230 kernel: Initialise system trusted keyrings Jul 14 23:03:35.779237 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 23:03:35.779243 kernel: Key type asymmetric registered Jul 14 23:03:35.779249 kernel: Asymmetric key parser 'x509' registered Jul 14 23:03:35.779256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 23:03:35.779262 kernel: io scheduler mq-deadline registered Jul 14 23:03:35.779285 kernel: io scheduler kyber registered Jul 14 23:03:35.779292 kernel: io scheduler bfq registered Jul 14 23:03:35.780834 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 23:03:35.780904 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.780963 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 23:03:35.781018 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781074 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 23:03:35.781126 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781180 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 23:03:35.781233 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781291 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 23:03:35.781352 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781435 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 23:03:35.781506 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781560 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 23:03:35.781617 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.781671 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 23:03:35.781725 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783385 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 23:03:35.783465 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783525 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 23:03:35.783584 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783639 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 23:03:35.783692 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783746 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 23:03:35.783800 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783853 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 23:03:35.783911 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.783966 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 23:03:35.784019 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784073 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 23:03:35.784127 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784183 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 23:03:35.784238 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784292 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 23:03:35.784362 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784442 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 23:03:35.784512 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784565 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 23:03:35.784621 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784675 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 23:03:35.784727 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784781 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 23:03:35.784834 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784888 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 23:03:35.784944 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.784999 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 23:03:35.785052 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785105 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 23:03:35.785158 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785214 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 23:03:35.785268 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785322 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 23:03:35.785402 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785455 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 23:03:35.785508 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785565 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 23:03:35.785617 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785671 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 23:03:35.785723 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785778 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 23:03:35.785831 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785888 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 23:03:35.785940 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.785994 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 23:03:35.786047 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:03:35.786062 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jul 14 23:03:35.786073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 23:03:35.786080 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 23:03:35.786086 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 23:03:35.786092 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 23:03:35.786098 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 23:03:35.786158 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 23:03:35.786210 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T23:03:35 UTC (1752534215) Jul 14 23:03:35.786258 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 23:03:35.786269 kernel: intel_pstate: CPU model not supported Jul 14 23:03:35.786276 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 23:03:35.786282 kernel: NET: Registered PF_INET6 protocol family Jul 14 23:03:35.786288 kernel: Segment Routing with IPv6 Jul 14 23:03:35.786295 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 23:03:35.786301 kernel: NET: Registered PF_PACKET protocol family Jul 14 23:03:35.786307 kernel: Key type dns_resolver registered Jul 14 23:03:35.786314 kernel: IPI shorthand broadcast: enabled Jul 14 23:03:35.786320 kernel: sched_clock: Marking stable (916003494, 218892448)->(1194738315, -59842373) Jul 14 23:03:35.786328 kernel: registered taskstats version 1 Jul 14 23:03:35.786334 kernel: Loading compiled-in X.509 certificates Jul 14 23:03:35.786340 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 23:03:35.786355 kernel: Key type .fscrypt registered Jul 14 23:03:35.786364 kernel: Key type fscrypt-provisioning registered Jul 14 23:03:35.786370 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 23:03:35.786377 kernel: ima: Allocated hash algorithm: sha1 Jul 14 23:03:35.786383 kernel: ima: No architecture policies found Jul 14 23:03:35.786389 kernel: clk: Disabling unused clocks Jul 14 23:03:35.786397 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 23:03:35.786403 kernel: Write protecting the kernel read-only data: 36864k Jul 14 23:03:35.786409 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 23:03:35.786416 kernel: Run /init as init process Jul 14 23:03:35.786422 kernel: with arguments: Jul 14 23:03:35.786428 kernel: /init Jul 14 23:03:35.786434 kernel: with environment: Jul 14 23:03:35.786440 kernel: HOME=/ Jul 14 23:03:35.786446 kernel: TERM=linux Jul 14 23:03:35.786453 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 23:03:35.786461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 23:03:35.786469 systemd[1]: Detected virtualization vmware. Jul 14 23:03:35.786475 systemd[1]: Detected architecture x86-64. Jul 14 23:03:35.786483 systemd[1]: Running in initrd. Jul 14 23:03:35.786489 systemd[1]: No hostname configured, using default hostname. Jul 14 23:03:35.786495 systemd[1]: Hostname set to . Jul 14 23:03:35.786503 systemd[1]: Initializing machine ID from random generator. Jul 14 23:03:35.786509 systemd[1]: Queued start job for default target initrd.target. Jul 14 23:03:35.786516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 23:03:35.786523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 14 23:03:35.786529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 23:03:35.786536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 23:03:35.786542 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 23:03:35.786548 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 23:03:35.786557 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 23:03:35.786563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 23:03:35.786570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:03:35.786577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:03:35.786583 systemd[1]: Reached target paths.target - Path Units. Jul 14 23:03:35.786589 systemd[1]: Reached target slices.target - Slice Units. Jul 14 23:03:35.786596 systemd[1]: Reached target swap.target - Swaps. Jul 14 23:03:35.786603 systemd[1]: Reached target timers.target - Timer Units. Jul 14 23:03:35.786610 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:03:35.786616 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:03:35.786622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 23:03:35.786629 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 23:03:35.786635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 23:03:35.786642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 23:03:35.786648 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 23:03:35.786654 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:03:35.786662 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 23:03:35.786669 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 23:03:35.786675 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 23:03:35.786682 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 23:03:35.786688 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 23:03:35.786694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 23:03:35.786700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:35.786721 systemd-journald[216]: Collecting audit messages is disabled. Jul 14 23:03:35.786739 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 23:03:35.786745 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:03:35.786751 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 23:03:35.786759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 23:03:35.786766 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 23:03:35.786773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:35.786779 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:35.786786 kernel: Bridge firewalling registered Jul 14 23:03:35.786792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 23:03:35.786800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 23:03:35.786806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 14 23:03:35.786814 systemd-journald[216]: Journal started Jul 14 23:03:35.786832 systemd-journald[216]: Runtime Journal (/run/log/journal/b29ca438d42e4dc3bfd367a8d8a2df63) is 4.8M, max 38.6M, 33.8M free. Jul 14 23:03:35.743335 systemd-modules-load[217]: Inserted module 'overlay' Jul 14 23:03:35.788583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 23:03:35.770203 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 14 23:03:35.790361 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 23:03:35.795675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:03:35.796154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 23:03:35.799433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 23:03:35.800439 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 23:03:35.800647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 23:03:35.806726 dracut-cmdline[244]: dracut-dracut-053 Jul 14 23:03:35.808110 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 23:03:35.808573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 23:03:35.812437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 23:03:35.828290 systemd-resolved[262]: Positive Trust Anchors:
Jul 14 23:03:35.828302 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:03:35.828324 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:03:35.830274 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 14 23:03:35.831100 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 23:03:35.831237 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:03:35.853361 kernel: SCSI subsystem initialized Jul 14 23:03:35.860356 kernel: Loading iSCSI transport class v2.0-870. Jul 14 23:03:35.867360 kernel: iscsi: registered transport (tcp) Jul 14 23:03:35.882367 kernel: iscsi: registered transport (qla4xxx) Jul 14 23:03:35.882407 kernel: QLogic iSCSI HBA Driver Jul 14 23:03:35.902375 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 23:03:35.907483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 23:03:35.922554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 23:03:35.922591 kernel: device-mapper: uevent: version 1.0.3 Jul 14 23:03:35.923633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 23:03:35.955363 kernel: raid6: avx2x4 gen() 52623 MB/s Jul 14 23:03:35.972360 kernel: raid6: avx2x2 gen() 53446 MB/s Jul 14 23:03:35.989650 kernel: raid6: avx2x1 gen() 44775 MB/s Jul 14 23:03:35.989688 kernel: raid6: using algorithm avx2x2 gen() 53446 MB/s Jul 14 23:03:36.007635 kernel: raid6: .... xor() 31302 MB/s, rmw enabled Jul 14 23:03:36.007686 kernel: raid6: using avx2x2 recovery algorithm Jul 14 23:03:36.021354 kernel: xor: automatically using best checksumming function avx Jul 14 23:03:36.122372 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 23:03:36.127469 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 23:03:36.132444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 23:03:36.139520 systemd-udevd[434]: Using default interface naming scheme 'v255'. Jul 14 23:03:36.142023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 23:03:36.152441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 23:03:36.159109 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Jul 14 23:03:36.173724 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 23:03:36.175481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 23:03:36.247922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 23:03:36.257421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 23:03:36.266808 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 23:03:36.267147 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 14 23:03:36.267515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:03:36.267826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 23:03:36.271440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 23:03:36.280964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:03:36.320358 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 23:03:36.336356 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 14 23:03:36.336391 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 23:03:36.351702 kernel: vmw_pvscsi: using 64bit dma Jul 14 23:03:36.351742 kernel: vmw_pvscsi: max_id: 16 Jul 14 23:03:36.351751 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 23:03:36.351760 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 23:03:36.353884 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 23:03:36.355359 kernel: AVX2 version of gcm_enc/dec engaged. Jul 14 23:03:36.355577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 23:03:36.358657 kernel: AES CTR mode by8 optimization enabled Jul 14 23:03:36.358674 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 23:03:36.358683 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 23:03:36.358690 kernel: vmw_pvscsi: using MSI-X Jul 14 23:03:36.358698 kernel: libata version 3.00 loaded. Jul 14 23:03:36.358705 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 23:03:36.355650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 23:03:36.358645 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:36.358744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 14 23:03:36.358829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:36.359006 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:36.369847 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 23:03:36.369950 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 23:03:36.370039 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 23:03:36.370112 kernel: scsi host1: ata_piix Jul 14 23:03:36.370181 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 23:03:36.371196 kernel: scsi host2: ata_piix Jul 14 23:03:36.371277 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 14 23:03:36.371287 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 14 23:03:36.365606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:36.386073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:36.391454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 23:03:36.402208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 23:03:36.536364 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 23:03:36.541386 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 23:03:36.555382 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 23:03:36.555473 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 23:03:36.556613 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 23:03:36.556688 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 23:03:36.556752 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 23:03:36.560362 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 23:03:36.560475 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 23:03:36.562373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.562391 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 23:03:36.574408 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 23:03:36.603318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 14 23:03:36.603949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486) Jul 14 23:03:36.607374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 14 23:03:36.610362 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (488) Jul 14 23:03:36.612088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 23:03:36.616115 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 14 23:03:36.616252 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 14 23:03:36.625468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 14 23:03:36.653385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.659359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:36.664359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:37.664401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:03:37.665570 disk-uuid[589]: The operation has completed successfully. Jul 14 23:03:37.706658 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 23:03:37.706716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 23:03:37.711428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 23:03:37.713808 sh[609]: Success Jul 14 23:03:37.722382 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 14 23:03:37.778471 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 23:03:37.795375 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 23:03:37.795634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 23:03:37.821373 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 23:03:37.821412 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:37.823925 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 23:03:37.823943 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 23:03:37.825834 kernel: BTRFS info (device dm-0): using free space tree Jul 14 23:03:37.842367 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 14 23:03:37.845381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 23:03:37.860567 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 14 23:03:37.861970 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 14 23:03:37.883753 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:37.883795 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:37.883805 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:37.888362 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:37.903919 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 23:03:37.904356 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:37.908659 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 23:03:37.913459 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 23:03:37.935176 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 23:03:37.941495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 14 23:03:38.003634 ignition[668]: Ignition 2.19.0 Jul 14 23:03:38.004187 ignition[668]: Stage: fetch-offline Jul 14 23:03:38.004222 ignition[668]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.004231 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.004312 ignition[668]: parsed url from cmdline: "" Jul 14 23:03:38.004315 ignition[668]: no config URL provided Jul 14 23:03:38.004320 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 23:03:38.004327 ignition[668]: no config at "/usr/lib/ignition/user.ign" Jul 14 23:03:38.005259 ignition[668]: config successfully fetched Jul 14 23:03:38.005294 ignition[668]: parsing config with SHA512: c831134dcff4a48d0cb37b47062598fc42c1ba8d40d5754708a426b74bdd5d3b8009941c70738f1e0638725082897fec22bc766038005bb11117fab29b76cfe1 Jul 14 23:03:38.011151 unknown[668]: fetched base config from "system" Jul 14 23:03:38.011160 unknown[668]: fetched user config from "vmware" Jul 14 23:03:38.011595 ignition[668]: fetch-offline: fetch-offline passed Jul 14 23:03:38.011652 ignition[668]: Ignition finished successfully Jul 14 23:03:38.012540 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:03:38.027155 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 23:03:38.039622 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 23:03:38.051716 systemd-networkd[803]: lo: Link UP Jul 14 23:03:38.051722 systemd-networkd[803]: lo: Gained carrier Jul 14 23:03:38.052474 systemd-networkd[803]: Enumeration completed Jul 14 23:03:38.052663 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:03:38.052731 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 23:03:38.052851 systemd[1]: Reached target network.target - Network. 
Jul 14 23:03:38.056421 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 23:03:38.056585 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 23:03:38.052961 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 23:03:38.056471 systemd-networkd[803]: ens192: Link UP Jul 14 23:03:38.056474 systemd-networkd[803]: ens192: Gained carrier Jul 14 23:03:38.066539 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 23:03:38.078599 ignition[805]: Ignition 2.19.0 Jul 14 23:03:38.078611 ignition[805]: Stage: kargs Jul 14 23:03:38.078759 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.078769 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.079680 ignition[805]: kargs: kargs passed Jul 14 23:03:38.079721 ignition[805]: Ignition finished successfully Jul 14 23:03:38.080879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 23:03:38.086455 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 14 23:03:38.094429 ignition[813]: Ignition 2.19.0 Jul 14 23:03:38.094436 ignition[813]: Stage: disks Jul 14 23:03:38.094574 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.094581 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.095119 ignition[813]: disks: disks passed Jul 14 23:03:38.095147 ignition[813]: Ignition finished successfully Jul 14 23:03:38.095872 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 23:03:38.096316 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 23:03:38.096458 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 23:03:38.096645 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 14 23:03:38.096823 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:03:38.097010 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:03:38.100499 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 23:03:38.112036 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 14 23:03:38.112955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 23:03:38.118454 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 23:03:38.184257 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 23:03:38.184420 kernel: EXT4-fs (sda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 23:03:38.184765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 23:03:38.190407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:03:38.191394 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 14 23:03:38.192197 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 23:03:38.192224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 23:03:38.192239 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:03:38.196071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 23:03:38.197066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 14 23:03:38.199376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829) Jul 14 23:03:38.201438 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.201466 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:38.203368 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:38.207363 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:38.209053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 23:03:38.226553 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 23:03:38.230558 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Jul 14 23:03:38.233420 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 23:03:38.235488 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 23:03:38.296526 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 23:03:38.301455 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 23:03:38.304056 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 23:03:38.308353 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.324555 ignition[943]: INFO : Ignition 2.19.0 Jul 14 23:03:38.324555 ignition[943]: INFO : Stage: mount Jul 14 23:03:38.325014 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.325014 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.325236 ignition[943]: INFO : mount: mount passed Jul 14 23:03:38.325372 ignition[943]: INFO : Ignition finished successfully Jul 14 23:03:38.325914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 23:03:38.328758 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 14 23:03:38.328989 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 23:03:38.817962 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 23:03:38.828515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 23:03:38.914364 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Jul 14 23:03:38.916991 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 23:03:38.917010 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:03:38.917018 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:03:38.920358 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:03:38.921573 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 23:03:38.937059 ignition[972]: INFO : Ignition 2.19.0 Jul 14 23:03:38.937059 ignition[972]: INFO : Stage: files Jul 14 23:03:38.937588 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 23:03:38.937588 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:03:38.937810 ignition[972]: DEBUG : files: compiled without relabeling support, skipping Jul 14 23:03:38.938521 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 23:03:38.938521 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 23:03:38.941652 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 23:03:38.941844 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 23:03:38.942209 unknown[972]: wrote ssh authorized keys file for user: core Jul 14 23:03:38.942475 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 23:03:38.945065 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 23:03:38.945065 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 23:03:38.983679 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 23:03:39.128630 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 23:03:39.128630 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 23:03:39.129063 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:39.129762 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 23:03:39.564325 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 23:03:39.777549 systemd-networkd[803]: ens192: Gained IPv6LL Jul 14 23:03:40.023385 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 23:03:40.023938 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 23:03:40.023938 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 23:03:40.023938 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 14 23:03:40.029412 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:03:40.029653 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 23:03:40.030404 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 14 23:03:40.030404 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 23:03:40.542355 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 23:03:40.545269 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:03:40.545269 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 23:03:40.545269 ignition[972]: INFO : files: files passed Jul 14 23:03:40.545269 ignition[972]: INFO : Ignition finished successfully Jul 14 23:03:40.547086 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 23:03:40.550427 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 23:03:40.552039 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 23:03:40.552623 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 23:03:40.552815 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 23:03:40.558069 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.558069 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.559093 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 23:03:40.559963 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:03:40.560532 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 23:03:40.564429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 23:03:40.577037 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 23:03:40.577096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 23:03:40.577519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 23:03:40.577640 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 23:03:40.577838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 23:03:40.578284 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 23:03:40.587534 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:03:40.591421 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 14 23:03:40.596869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:03:40.597042 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 23:03:40.597266 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 23:03:40.597453 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 23:03:40.597523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 23:03:40.597770 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 23:03:40.598013 systemd[1]: Stopped target basic.target - Basic System. Jul 14 23:03:40.598196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 23:03:40.598393 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 23:03:40.598588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 23:03:40.598790 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 23:03:40.599138 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 23:03:40.599329 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 23:03:40.599547 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 23:03:40.599730 systemd[1]: Stopped target swap.target - Swaps. Jul 14 23:03:40.599886 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 23:03:40.599947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 23:03:40.600197 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:03:40.600435 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 23:03:40.600618 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 23:03:40.600664 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 14 23:03:40.600830 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 23:03:40.600890 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 23:03:40.601177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 23:03:40.601241 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 23:03:40.601492 systemd[1]: Stopped target paths.target - Path Units. Jul 14 23:03:40.601630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 23:03:40.607367 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 23:03:40.607534 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 23:03:40.607731 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 23:03:40.607916 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 23:03:40.607981 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 23:03:40.608198 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 23:03:40.608245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 23:03:40.608524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 23:03:40.608609 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 23:03:40.608838 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 23:03:40.608916 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 23:03:40.616501 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 23:03:40.616607 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 23:03:40.616698 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 23:03:40.619479 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 14 23:03:40.619593 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 23:03:40.619688 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 23:03:40.619866 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 23:03:40.619937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 23:03:40.621672 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 23:03:40.621734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 23:03:40.626780 ignition[1027]: INFO : Ignition 2.19.0
Jul 14 23:03:40.627280 ignition[1027]: INFO : Stage: umount
Jul 14 23:03:40.627280 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 23:03:40.627280 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:03:40.628206 ignition[1027]: INFO : umount: umount passed
Jul 14 23:03:40.628381 ignition[1027]: INFO : Ignition finished successfully
Jul 14 23:03:40.628998 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 23:03:40.629175 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 23:03:40.629721 systemd[1]: Stopped target network.target - Network.
Jul 14 23:03:40.630048 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 23:03:40.630079 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 23:03:40.630298 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 23:03:40.630321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 23:03:40.630810 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 23:03:40.630834 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 23:03:40.630944 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 23:03:40.630965 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 23:03:40.631155 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 23:03:40.631571 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 23:03:40.633640 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 23:03:40.633841 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 23:03:40.634855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 23:03:40.635005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 23:03:40.637173 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 23:03:40.637232 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 23:03:40.637583 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 23:03:40.637610 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 23:03:40.641439 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 23:03:40.641536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 23:03:40.641563 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 23:03:40.641686 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 14 23:03:40.641708 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 14 23:03:40.641827 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 23:03:40.641847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 23:03:40.641955 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 23:03:40.641975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 23:03:40.642123 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 23:03:40.647643 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 23:03:40.647705 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 23:03:40.651681 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 23:03:40.651763 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 23:03:40.652227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 23:03:40.652254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 23:03:40.654252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 23:03:40.654275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 23:03:40.654387 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 23:03:40.654412 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 23:03:40.654559 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 23:03:40.654581 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 23:03:40.654706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 23:03:40.654728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 23:03:40.659446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 23:03:40.659558 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 23:03:40.659592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 23:03:40.659727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 23:03:40.659750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 23:03:40.660447 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 23:03:40.662714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 23:03:40.662770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 23:03:40.888836 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 23:03:40.888902 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 23:03:40.889184 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 23:03:40.889294 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 23:03:40.889320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 23:03:40.893427 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 23:03:40.897043 systemd[1]: Switching root.
Jul 14 23:03:40.924809 systemd-journald[216]: Journal stopped
Jul 14 23:03:42.018199 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Jul 14 23:03:42.018228 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 23:03:42.018236 kernel: SELinux: policy capability open_perms=1
Jul 14 23:03:42.018242 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 23:03:42.018247 kernel: SELinux: policy capability always_check_network=0
Jul 14 23:03:42.018253 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 23:03:42.018260 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 23:03:42.018266 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 23:03:42.018272 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 23:03:42.018277 kernel: audit: type=1403 audit(1752534221.382:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 23:03:42.018284 systemd[1]: Successfully loaded SELinux policy in 36.784ms.
Jul 14 23:03:42.018292 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.501ms.
Jul 14 23:03:42.018299 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 23:03:42.018307 systemd[1]: Detected virtualization vmware.
Jul 14 23:03:42.018314 systemd[1]: Detected architecture x86-64.
Jul 14 23:03:42.018320 systemd[1]: Detected first boot.
Jul 14 23:03:42.018326 systemd[1]: Initializing machine ID from random generator.
Jul 14 23:03:42.018335 zram_generator::config[1069]: No configuration found.
Jul 14 23:03:42.018350 systemd[1]: Populated /etc with preset unit settings.
Jul 14 23:03:42.018361 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 14 23:03:42.018368 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jul 14 23:03:42.018375 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 23:03:42.018381 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 23:03:42.018387 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 23:03:42.018396 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 23:03:42.018404 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 23:03:42.018415 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 23:03:42.018422 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 23:03:42.018429 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 23:03:42.018436 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 23:03:42.018457 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 23:03:42.018465 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 23:03:42.018471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 23:03:42.018478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 23:03:42.018485 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 23:03:42.018491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 23:03:42.018498 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 23:03:42.018504 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 23:03:42.018511 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 14 23:03:42.018519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 23:03:42.018527 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 23:03:42.018535 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 23:03:42.018542 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 23:03:42.018548 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 23:03:42.018555 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 23:03:42.018562 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 23:03:42.018568 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 23:03:42.018576 systemd[1]: Reached target swap.target - Swaps.
Jul 14 23:03:42.018583 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 23:03:42.018590 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 23:03:42.018596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 23:03:42.018603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 23:03:42.018612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 23:03:42.018619 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 23:03:42.018626 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 23:03:42.018632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 23:03:42.018639 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 23:03:42.018646 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:42.018653 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 23:03:42.018660 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 23:03:42.018668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 23:03:42.018675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 23:03:42.018682 systemd[1]: Reached target machines.target - Containers.
Jul 14 23:03:42.018689 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 23:03:42.018696 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jul 14 23:03:42.018703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 23:03:42.018710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 23:03:42.018717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:03:42.018725 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 23:03:42.018732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:03:42.018738 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 23:03:42.018745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:03:42.018752 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 23:03:42.018759 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 23:03:42.018765 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 23:03:42.018772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 23:03:42.018779 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 23:03:42.018787 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 23:03:42.018795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 23:03:42.018801 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 23:03:42.018808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 23:03:42.018815 kernel: loop: module loaded
Jul 14 23:03:42.018822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 23:03:42.018829 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 23:03:42.018836 systemd[1]: Stopped verity-setup.service.
Jul 14 23:03:42.018844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:42.018851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 23:03:42.018858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 23:03:42.018865 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 23:03:42.018872 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 23:03:42.018878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 23:03:42.018885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 23:03:42.018892 kernel: ACPI: bus type drm_connector registered
Jul 14 23:03:42.018898 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 23:03:42.018907 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 23:03:42.018914 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 23:03:42.018921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:03:42.018927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:03:42.018934 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 23:03:42.018942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 23:03:42.018949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:03:42.018956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:03:42.018962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:03:42.018971 kernel: fuse: init (API version 7.39)
Jul 14 23:03:42.018977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:03:42.018984 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 23:03:42.018991 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 23:03:42.018998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 23:03:42.019005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 23:03:42.019012 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 23:03:42.019018 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 23:03:42.019041 systemd-journald[1159]: Collecting audit messages is disabled.
Jul 14 23:03:42.019057 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 23:03:42.019065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 23:03:42.019072 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 23:03:42.019081 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 23:03:42.019088 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 23:03:42.019095 systemd-journald[1159]: Journal started
Jul 14 23:03:42.019112 systemd-journald[1159]: Runtime Journal (/run/log/journal/d800a3e707b74c62a13c5aadb0ca9bf7) is 4.8M, max 38.6M, 33.8M free.
Jul 14 23:03:41.780225 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 23:03:41.829602 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 14 23:03:41.829816 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 23:03:42.022313 jq[1136]: true
Jul 14 23:03:42.022848 jq[1167]: true
Jul 14 23:03:42.026141 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 23:03:42.026157 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 23:03:42.028438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:03:42.036400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 23:03:42.038384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 23:03:42.046137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 23:03:42.046168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 23:03:42.047023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 23:03:42.052050 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 23:03:42.052079 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 23:03:42.051962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 23:03:42.053538 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 23:03:42.053770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 23:03:42.054117 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 23:03:42.068113 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 23:03:42.076404 kernel: loop0: detected capacity change from 0 to 2976
Jul 14 23:03:42.082172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 23:03:42.101333 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 23:03:42.110453 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 23:03:42.113160 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 23:03:42.122301 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 23:03:42.137388 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 23:03:42.144505 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 23:03:42.145523 systemd-journald[1159]: Time spent on flushing to /var/log/journal/d800a3e707b74c62a13c5aadb0ca9bf7 is 29ms for 1840 entries.
Jul 14 23:03:42.145523 systemd-journald[1159]: System Journal (/var/log/journal/d800a3e707b74c62a13c5aadb0ca9bf7) is 8.0M, max 584.8M, 576.8M free.
Jul 14 23:03:42.221469 systemd-journald[1159]: Received client request to flush runtime journal.
Jul 14 23:03:42.221518 kernel: loop1: detected capacity change from 0 to 221472
Jul 14 23:03:42.153980 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 23:03:42.199196 ignition[1187]: Ignition 2.19.0
Jul 14 23:03:42.161614 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 23:03:42.199412 ignition[1187]: deleting config from guestinfo properties
Jul 14 23:03:42.166710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 23:03:42.219552 ignition[1187]: Successfully deleted config
Jul 14 23:03:42.171287 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 23:03:42.223142 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jul 14 23:03:42.223698 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 23:03:42.226943 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 23:03:42.233549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 23:03:42.251880 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 14 23:03:42.251892 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 14 23:03:42.256545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 23:03:42.259290 kernel: loop2: detected capacity change from 0 to 142488
Jul 14 23:03:42.311610 kernel: loop3: detected capacity change from 0 to 140768
Jul 14 23:03:42.407359 kernel: loop4: detected capacity change from 0 to 2976
Jul 14 23:03:42.455364 kernel: loop5: detected capacity change from 0 to 221472
Jul 14 23:03:42.473359 kernel: loop6: detected capacity change from 0 to 142488
Jul 14 23:03:42.499941 kernel: loop7: detected capacity change from 0 to 140768
Jul 14 23:03:42.528774 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Jul 14 23:03:42.529068 (sd-merge)[1239]: Merged extensions into '/usr'.
Jul 14 23:03:42.533405 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 23:03:42.533415 systemd[1]: Reloading...
Jul 14 23:03:42.585843 zram_generator::config[1265]: No configuration found.
Jul 14 23:03:42.703093 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 14 23:03:42.719255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 23:03:42.756603 systemd[1]: Reloading finished in 222 ms.
Jul 14 23:03:42.771313 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 23:03:42.786057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 23:03:42.786364 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 23:03:42.786614 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 23:03:42.795527 systemd[1]: Starting ensure-sysext.service...
Jul 14 23:03:42.796348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 23:03:42.799445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 23:03:42.806223 systemd[1]: Reloading requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)...
Jul 14 23:03:42.806302 systemd[1]: Reloading...
Jul 14 23:03:42.817021 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 23:03:42.817244 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 23:03:42.817790 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 23:03:42.817964 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jul 14 23:03:42.818005 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jul 14 23:03:42.819707 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 23:03:42.819716 systemd-tmpfiles[1323]: Skipping /boot
Jul 14 23:03:42.823100 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Jul 14 23:03:42.829547 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 23:03:42.829554 systemd-tmpfiles[1323]: Skipping /boot
Jul 14 23:03:42.849378 zram_generator::config[1351]: No configuration found.
Jul 14 23:03:42.944357 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 14 23:03:42.947356 kernel: ACPI: button: Power Button [PWRF]
Jul 14 23:03:42.970997 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 14 23:03:42.994740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 23:03:43.012379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1369)
Jul 14 23:03:43.038750 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 14 23:03:43.038840 systemd[1]: Reloading finished in 232 ms.
Jul 14 23:03:43.042355 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jul 14 23:03:43.050691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 23:03:43.051141 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 23:03:43.066372 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jul 14 23:03:43.069366 kernel: Guest personality initialized and is active
Jul 14 23:03:43.073392 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 14 23:03:43.073428 kernel: Initialized host personality
Jul 14 23:03:43.076497 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jul 14 23:03:43.077280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.086015 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jul 14 23:03:43.084750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 23:03:43.086795 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 23:03:43.089936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 23:03:43.090746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 23:03:43.092970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 23:03:43.093165 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:03:43.094161 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 23:03:43.100962 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 23:03:43.102485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 23:03:43.104488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 23:03:43.108524 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 23:03:43.108660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.110495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.110638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:03:43.110724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.116617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 23:03:43.116742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 23:03:43.117150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.118195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 23:03:43.118393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 23:03:43.118507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 23:03:43.119949 systemd[1]: Finished ensure-sysext.service.
Jul 14 23:03:43.129573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 23:03:43.131557 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 23:03:43.131748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 23:03:43.133890 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 23:03:43.141595 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 23:03:43.141905 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 23:03:43.143459 (udev-worker)[1371]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jul 14 23:03:43.145961 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 23:03:43.155587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 23:03:43.155712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 23:03:43.155931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 23:03:43.171665 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 23:03:43.172119 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 23:03:43.173361 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 23:03:43.177532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 23:03:43.177668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 23:03:43.179997 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 23:03:43.183203 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 23:03:43.183311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 23:03:43.192996 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 23:03:43.199591 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 23:03:43.200242 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 23:03:43.209383 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 23:03:43.217380 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 23:03:43.219590 augenrules[1487]: No rules Jul 14 23:03:43.222286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 23:03:43.229598 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 23:03:43.230155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 23:03:43.234622 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 23:03:43.246358 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 14 23:03:43.272545 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 23:03:43.279660 systemd-networkd[1449]: lo: Link UP Jul 14 23:03:43.279665 systemd-networkd[1449]: lo: Gained carrier Jul 14 23:03:43.280437 systemd-networkd[1449]: Enumeration completed Jul 14 23:03:43.280666 systemd-networkd[1449]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 14 23:03:43.281414 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 23:03:43.281938 systemd-networkd[1449]: ens192: Link UP Jul 14 23:03:43.282037 systemd-networkd[1449]: ens192: Gained carrier Jul 14 23:03:43.282358 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 23:03:43.282667 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 23:03:43.294093 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 23:03:43.295654 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 23:03:43.296091 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 23:03:43.300068 systemd-resolved[1450]: Positive Trust Anchors: Jul 14 23:03:43.300166 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 23:03:43.300191 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 23:03:43.301527 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 23:03:43.310697 systemd-resolved[1450]: Defaulting to hostname 'linux'. Jul 14 23:03:43.311772 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 23:03:43.311996 systemd[1]: Reached target network.target - Network. Jul 14 23:03:43.312126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 23:03:43.312238 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 23:03:43.312437 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 23:03:43.312563 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 23:03:43.312764 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 23:03:43.312922 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 23:03:43.313030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 23:03:43.313136 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 23:03:43.313151 systemd[1]: Reached target paths.target - Path Units. Jul 14 23:03:43.313234 systemd[1]: Reached target timers.target - Timer Units. Jul 14 23:03:43.326393 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 23:03:43.327584 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 23:03:43.334587 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 23:03:43.335061 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 23:03:43.335210 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 23:03:43.335302 systemd[1]: Reached target basic.target - Basic System. Jul 14 23:03:43.335434 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 23:03:43.335450 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 23:03:43.336224 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 23:03:43.339570 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 23:03:43.341945 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 23:03:43.344449 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 23:03:43.345284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 23:03:43.346373 jq[1510]: false Jul 14 23:03:43.346699 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 23:03:43.356445 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 23:03:43.358229 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 23:03:43.360445 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 14 23:03:43.368762 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 23:03:43.369555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 23:03:43.370046 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 23:03:43.371667 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 23:03:43.374423 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 23:03:43.381163 extend-filesystems[1511]: Found loop4 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found loop5 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found loop6 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found loop7 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda1 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda2 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda3 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found usr Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda4 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda6 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda7 Jul 14 23:03:43.381163 extend-filesystems[1511]: Found sda9 Jul 14 23:03:43.381163 extend-filesystems[1511]: Checking size of /dev/sda9 Jul 14 23:03:43.395469 jq[1521]: true Jul 14 23:03:43.383434 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 14 23:03:43.395219 dbus-daemon[1509]: [system] SELinux support is enabled Jul 14 23:03:43.384963 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 23:03:43.385515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 14 23:03:43.387533 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 23:03:43.387646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 23:03:43.396033 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 23:03:43.405802 update_engine[1520]: I20250714 23:03:43.398788 1520 main.cc:92] Flatcar Update Engine starting Jul 14 23:03:43.405802 update_engine[1520]: I20250714 23:03:43.399683 1520 update_check_scheduler.cc:74] Next update check in 3m43s Jul 14 23:03:43.400086 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 23:03:43.400520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 23:03:43.408336 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 23:03:43.408377 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 23:03:43.408551 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 23:03:43.408563 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 23:03:43.412802 extend-filesystems[1511]: Old size kept for /dev/sda9 Jul 14 23:03:43.412802 extend-filesystems[1511]: Found sr0 Jul 14 23:03:43.413230 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 23:03:43.413848 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 23:03:43.414733 systemd[1]: Started update-engine.service - Update Engine. Jul 14 23:03:43.418376 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 14 23:03:43.418633 jq[1532]: true Jul 14 23:03:43.426278 (ntainerd)[1538]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 23:03:43.430629 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 14 23:03:43.432390 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 14 23:03:43.433070 tar[1531]: linux-amd64/helm Jul 14 23:03:43.439576 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1383) Jul 14 23:03:43.481978 systemd-logind[1517]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 23:03:43.484384 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 23:03:43.484492 systemd-logind[1517]: New seat seat0. Jul 14 23:03:43.490598 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jul 14 23:03:43.490840 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 23:03:43.496682 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Jul 14 23:03:43.498382 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 23:03:43.498965 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 23:03:43.502479 unknown[1553]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 14 23:05:10.364202 systemd-timesyncd[1458]: Contacted time server 97.107.136.23:123 (0.flatcar.pool.ntp.org). Jul 14 23:05:10.364239 systemd-timesyncd[1458]: Initial clock synchronization to Mon 2025-07-14 23:05:10.364106 UTC. Jul 14 23:05:10.364844 unknown[1553]: Core dump limit set to -1 Jul 14 23:05:10.365127 systemd-resolved[1450]: Clock change detected. Flushing caches. 
Jul 14 23:05:10.374160 kernel: NET: Registered PF_VSOCK protocol family Jul 14 23:05:10.551126 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 23:05:10.569167 containerd[1538]: time="2025-07-14T23:05:10.569112415Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 23:05:10.591607 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 23:05:10.603556 containerd[1538]: time="2025-07-14T23:05:10.603521422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613233 containerd[1538]: time="2025-07-14T23:05:10.613197986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613233 containerd[1538]: time="2025-07-14T23:05:10.613229155Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 23:05:10.613342 containerd[1538]: time="2025-07-14T23:05:10.613243956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 23:05:10.613358 containerd[1538]: time="2025-07-14T23:05:10.613349208Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 23:05:10.613373 containerd[1538]: time="2025-07-14T23:05:10.613360207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613418 containerd[1538]: time="2025-07-14T23:05:10.613405104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613437 containerd[1538]: time="2025-07-14T23:05:10.613416945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613532 containerd[1538]: time="2025-07-14T23:05:10.613517993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613532 containerd[1538]: time="2025-07-14T23:05:10.613529592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613570 containerd[1538]: time="2025-07-14T23:05:10.613537574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613570 containerd[1538]: time="2025-07-14T23:05:10.613543581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613604 containerd[1538]: time="2025-07-14T23:05:10.613587133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613723 containerd[1538]: time="2025-07-14T23:05:10.613711538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613790 containerd[1538]: time="2025-07-14T23:05:10.613777700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 23:05:10.613809 containerd[1538]: time="2025-07-14T23:05:10.613788747Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 23:05:10.613845 containerd[1538]: time="2025-07-14T23:05:10.613834699Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 23:05:10.613876 containerd[1538]: time="2025-07-14T23:05:10.613866432Z" level=info msg="metadata content store policy set" policy=shared Jul 14 23:05:10.615675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 23:05:10.622242 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 23:05:10.623920 containerd[1538]: time="2025-07-14T23:05:10.623895762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 23:05:10.623967 containerd[1538]: time="2025-07-14T23:05:10.623935485Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 23:05:10.623967 containerd[1538]: time="2025-07-14T23:05:10.623947446Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 23:05:10.623967 containerd[1538]: time="2025-07-14T23:05:10.623961921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 23:05:10.624041 containerd[1538]: time="2025-07-14T23:05:10.623971941Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 23:05:10.624065 containerd[1538]: time="2025-07-14T23:05:10.624058886Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626023274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626123138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626134821Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626142718Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626151151Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626159008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626166579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626175470Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626184143Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626192817Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626201068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626207959Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626223122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628069 containerd[1538]: time="2025-07-14T23:05:10.626231768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626239079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626247599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626255508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626262972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626270010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626277460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626284578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626293605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626300122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626307686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626315149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626327596Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626342804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626350437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628333 containerd[1538]: time="2025-07-14T23:05:10.626363632Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626396520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626408446Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626415685Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626425145Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626430948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626438139Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626446654Z" level=info msg="NRI interface is disabled by configuration." Jul 14 23:05:10.628541 containerd[1538]: time="2025-07-14T23:05:10.626453273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 23:05:10.628389 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.626623030Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.626661908Z" level=info msg="Connect containerd service" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.626682134Z" level=info msg="using legacy CRI server" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.626686787Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.626760599Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627106192Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627357589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627384372Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627406132Z" level=info msg="Start subscribing containerd event" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627430398Z" level=info msg="Start recovering state" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627462585Z" level=info msg="Start event monitor" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627469635Z" level=info msg="Start snapshots syncer" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627475366Z" level=info msg="Start cni network conf syncer for default" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627483120Z" level=info msg="Start streaming server" Jul 14 23:05:10.628689 containerd[1538]: time="2025-07-14T23:05:10.627519219Z" level=info msg="containerd successfully booted in 0.059722s" Jul 14 23:05:10.628897 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 23:05:10.629022 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 23:05:10.634826 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 23:05:10.648346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 23:05:10.655356 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 23:05:10.656619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 23:05:10.656832 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 23:05:10.804256 tar[1531]: linux-amd64/LICENSE Jul 14 23:05:10.804341 tar[1531]: linux-amd64/README.md Jul 14 23:05:10.818427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 23:05:11.497331 systemd-networkd[1449]: ens192: Gained IPv6LL Jul 14 23:05:11.498298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jul 14 23:05:11.499166 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 23:05:11.504277 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 14 23:05:11.506011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:05:11.509742 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 23:05:11.527238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 23:05:11.536849 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 23:05:11.536982 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 14 23:05:11.537629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 23:05:12.422176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:05:12.422598 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 23:05:12.422861 systemd[1]: Startup finished in 998ms (kernel) + 5.750s (initrd) + 4.220s (userspace) = 10.968s. Jul 14 23:05:12.423421 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:05:12.449112 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 14 23:05:12.449952 login[1654]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 14 23:05:12.456371 systemd-logind[1517]: New session 2 of user core. Jul 14 23:05:12.456867 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 23:05:12.462417 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 23:05:12.463994 systemd-logind[1517]: New session 1 of user core. Jul 14 23:05:12.469731 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jul 14 23:05:12.475888 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 23:05:12.477850 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:05:12.602821 systemd[1695]: Queued start job for default target default.target. Jul 14 23:05:12.611181 systemd[1695]: Created slice app.slice - User Application Slice. Jul 14 23:05:12.611249 systemd[1695]: Reached target paths.target - Paths. Jul 14 23:05:12.611295 systemd[1695]: Reached target timers.target - Timers. Jul 14 23:05:12.612145 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 23:05:12.619195 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 23:05:12.619231 systemd[1695]: Reached target sockets.target - Sockets. Jul 14 23:05:12.619241 systemd[1695]: Reached target basic.target - Basic System. Jul 14 23:05:12.619268 systemd[1695]: Reached target default.target - Main User Target. Jul 14 23:05:12.619286 systemd[1695]: Startup finished in 138ms. Jul 14 23:05:12.619299 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 23:05:12.620270 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 23:05:12.620835 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 23:05:13.049115 kubelet[1688]: E0714 23:05:13.049067 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:05:13.050544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:05:13.050628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:05:23.258146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 14 23:05:23.266329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:05:23.334005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:05:23.336410 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:05:23.373137 kubelet[1739]: E0714 23:05:23.373101 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:05:23.375532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:05:23.375675 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:05:33.508203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 23:05:33.515323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:05:33.577343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:05:33.579909 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:05:33.602496 kubelet[1754]: E0714 23:05:33.602468 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:05:33.603988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:05:33.604084 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
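Every kubelet failure above is the same error: /var/lib/kubelet/config.yaml does not exist yet, so the service exits with status 1 and systemd retries on its restart timer. kubeadm normally writes that file during `kubeadm init` or `kubeadm join`. A hand-written stand-in, sketched on a temp path (the field values are illustrative assumptions, not taken from this host):

```shell
# Minimal KubeletConfiguration stand-in; kubeadm would normally generate
# /var/lib/kubelet/config.yaml. Values below are assumed for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# The file now exists, which is the condition the failing service checks.
grep -c 'KubeletConfiguration' "$cfg"
```

With such a file in place (at the real path), the `open /var/lib/kubelet/config.yaml: no such file or directory` exits would stop, though the kubelet would still need valid cluster credentials to make progress.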
Jul 14 23:05:40.461243 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 23:05:40.462241 systemd[1]: Started sshd@0-139.178.70.101:22-139.178.68.195:37566.service - OpenSSH per-connection server daemon (139.178.68.195:37566). Jul 14 23:05:40.496032 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 37566 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:40.496687 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:40.498975 systemd-logind[1517]: New session 3 of user core. Jul 14 23:05:40.506283 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 23:05:40.558117 systemd[1]: Started sshd@1-139.178.70.101:22-139.178.68.195:37568.service - OpenSSH per-connection server daemon (139.178.68.195:37568). Jul 14 23:05:40.595618 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 37568 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:40.594842 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:40.597973 systemd-logind[1517]: New session 4 of user core. Jul 14 23:05:40.603373 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 23:05:40.652205 sshd[1767]: pam_unix(sshd:session): session closed for user core Jul 14 23:05:40.661585 systemd[1]: sshd@1-139.178.70.101:22-139.178.68.195:37568.service: Deactivated successfully. Jul 14 23:05:40.662466 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 23:05:40.662852 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Jul 14 23:05:40.664066 systemd[1]: Started sshd@2-139.178.70.101:22-139.178.68.195:37578.service - OpenSSH per-connection server daemon (139.178.68.195:37578). Jul 14 23:05:40.665186 systemd-logind[1517]: Removed session 4. 
Jul 14 23:05:40.704544 sshd[1774]: Accepted publickey for core from 139.178.68.195 port 37578 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:40.705408 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:40.708702 systemd-logind[1517]: New session 5 of user core. Jul 14 23:05:40.714176 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 23:05:40.762190 sshd[1774]: pam_unix(sshd:session): session closed for user core Jul 14 23:05:40.770835 systemd[1]: sshd@2-139.178.70.101:22-139.178.68.195:37578.service: Deactivated successfully. Jul 14 23:05:40.771780 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 23:05:40.772859 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Jul 14 23:05:40.777309 systemd[1]: Started sshd@3-139.178.70.101:22-139.178.68.195:37594.service - OpenSSH per-connection server daemon (139.178.68.195:37594). Jul 14 23:05:40.778882 systemd-logind[1517]: Removed session 5. Jul 14 23:05:40.806046 sshd[1781]: Accepted publickey for core from 139.178.68.195 port 37594 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:40.806792 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:40.809111 systemd-logind[1517]: New session 6 of user core. Jul 14 23:05:40.820313 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 23:05:40.870056 sshd[1781]: pam_unix(sshd:session): session closed for user core Jul 14 23:05:40.874893 systemd[1]: sshd@3-139.178.70.101:22-139.178.68.195:37594.service: Deactivated successfully. Jul 14 23:05:40.876028 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 23:05:40.876951 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Jul 14 23:05:40.877703 systemd[1]: Started sshd@4-139.178.70.101:22-139.178.68.195:37596.service - OpenSSH per-connection server daemon (139.178.68.195:37596). 
Jul 14 23:05:40.879252 systemd-logind[1517]: Removed session 6. Jul 14 23:05:40.905318 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 37596 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:40.906040 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:40.908375 systemd-logind[1517]: New session 7 of user core. Jul 14 23:05:40.919278 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 23:05:40.973636 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 23:05:40.973802 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:05:40.985707 sudo[1791]: pam_unix(sudo:session): session closed for user root Jul 14 23:05:40.986707 sshd[1788]: pam_unix(sshd:session): session closed for user core Jul 14 23:05:40.995478 systemd[1]: sshd@4-139.178.70.101:22-139.178.68.195:37596.service: Deactivated successfully. Jul 14 23:05:40.996245 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 23:05:40.996997 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Jul 14 23:05:40.997704 systemd[1]: Started sshd@5-139.178.70.101:22-139.178.68.195:37602.service - OpenSSH per-connection server daemon (139.178.68.195:37602). Jul 14 23:05:40.999218 systemd-logind[1517]: Removed session 7. Jul 14 23:05:41.025384 sshd[1796]: Accepted publickey for core from 139.178.68.195 port 37602 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:41.026170 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:41.028739 systemd-logind[1517]: New session 8 of user core. Jul 14 23:05:41.035167 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 14 23:05:41.083530 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 23:05:41.083977 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:05:41.086403 sudo[1800]: pam_unix(sudo:session): session closed for user root Jul 14 23:05:41.090278 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 23:05:41.090491 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:05:41.105278 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 23:05:41.106196 auditctl[1803]: No rules Jul 14 23:05:41.106539 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 23:05:41.106696 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 23:05:41.108666 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 23:05:41.134790 augenrules[1821]: No rules Jul 14 23:05:41.135705 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 23:05:41.136433 sudo[1799]: pam_unix(sudo:session): session closed for user root Jul 14 23:05:41.138030 sshd[1796]: pam_unix(sshd:session): session closed for user core Jul 14 23:05:41.141486 systemd[1]: sshd@5-139.178.70.101:22-139.178.68.195:37602.service: Deactivated successfully. Jul 14 23:05:41.142275 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 23:05:41.143055 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Jul 14 23:05:41.143791 systemd[1]: Started sshd@6-139.178.70.101:22-139.178.68.195:37618.service - OpenSSH per-connection server daemon (139.178.68.195:37618). Jul 14 23:05:41.145250 systemd-logind[1517]: Removed session 8. 
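The back-to-back "No rules" messages from auditctl and augenrules follow directly from the two files `rm`ed just before: augenrules builds its ruleset by concatenating /etc/audit/rules.d/*.rules in sorted order, so an emptied directory loads nothing. The merge step, sketched on a scratch directory (the rule contents and file names are hypothetical):

```shell
# augenrules-style merge: concatenate *.rules from a rules.d directory,
# in lexical order, into a single loadable ruleset. Scratch-dir sketch;
# the two rules below are made-up examples.
d=$(mktemp -d)
printf -- '-w /etc/passwd -p wa -k passwd\n'        > "$d/10-base.rules"
printf -- '-a always,exit -F arch=b64 -S execve\n'  > "$d/99-exec.rules"
cat "$d"/*.rules > "$d/merged.rules"   # 10-base sorts before 99-exec
wc -l < "$d/merged.rules"
```

Deleting 80-selinux.rules and 99-default.rules as the log shows therefore empties the merged ruleset, which is why the subsequent restart reports no rules rather than failing.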
Jul 14 23:05:41.171360 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 37618 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:05:41.172159 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:05:41.174855 systemd-logind[1517]: New session 9 of user core. Jul 14 23:05:41.179154 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 23:05:41.228328 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 23:05:41.228487 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 23:05:41.497434 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 23:05:41.497732 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 23:05:41.739089 dockerd[1848]: time="2025-07-14T23:05:41.739035690Z" level=info msg="Starting up" Jul 14 23:05:41.801194 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2476127062-merged.mount: Deactivated successfully. Jul 14 23:05:41.806931 systemd[1]: var-lib-docker-metacopy\x2dcheck3768996188-merged.mount: Deactivated successfully. Jul 14 23:05:41.819071 dockerd[1848]: time="2025-07-14T23:05:41.818942467Z" level=info msg="Loading containers: start." Jul 14 23:05:41.876091 kernel: Initializing XFRM netlink socket Jul 14 23:05:41.924289 systemd-networkd[1449]: docker0: Link UP Jul 14 23:05:41.930866 dockerd[1848]: time="2025-07-14T23:05:41.930839123Z" level=info msg="Loading containers: done." 
Jul 14 23:05:41.938318 dockerd[1848]: time="2025-07-14T23:05:41.938290197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 23:05:41.938403 dockerd[1848]: time="2025-07-14T23:05:41.938353754Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 23:05:41.938423 dockerd[1848]: time="2025-07-14T23:05:41.938408592Z" level=info msg="Daemon has completed initialization" Jul 14 23:05:41.953237 dockerd[1848]: time="2025-07-14T23:05:41.952928710Z" level=info msg="API listen on /run/docker.sock" Jul 14 23:05:41.953408 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 23:05:42.795865 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2245910674-merged.mount: Deactivated successfully. Jul 14 23:05:43.459299 containerd[1538]: time="2025-07-14T23:05:43.459259461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 23:05:43.757943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 23:05:43.767286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:05:43.947577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
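dockerd's cold start above runs from "Starting up" (23:05:41.739035690) to "Daemon has completed initialization" (23:05:41.938408592). Both stamps fall in the same second, so the fractional nanosecond parts subtract directly:

```shell
# Elapsed dockerd init time, from the two log timestamps above.
start_ns=739035690   # fractional part of the "Starting up" stamp
done_ns=938408592    # fractional part of "Daemon has completed initialization"
echo "dockerd init: $(( (done_ns - start_ns) / 1000000 )) ms"
```

That puts daemon initialization at roughly 200 ms, with the overlayfs/metacopy probe mounts the log shows accounting for part of it.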
Jul 14 23:05:43.950246 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:05:43.988880 kubelet[1995]: E0714 23:05:43.988844 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:05:43.989903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:05:43.989981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:05:44.271616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559696503.mount: Deactivated successfully. Jul 14 23:05:45.140105 containerd[1538]: time="2025-07-14T23:05:45.140038415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:45.140842 containerd[1538]: time="2025-07-14T23:05:45.140711848Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 14 23:05:45.141119 containerd[1538]: time="2025-07-14T23:05:45.141104134Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:45.142890 containerd[1538]: time="2025-07-14T23:05:45.142876782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:45.143622 containerd[1538]: time="2025-07-14T23:05:45.143464758Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id 
\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.684169519s" Jul 14 23:05:45.143622 containerd[1538]: time="2025-07-14T23:05:45.143484450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 14 23:05:45.143897 containerd[1538]: time="2025-07-14T23:05:45.143773155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 23:05:46.718633 containerd[1538]: time="2025-07-14T23:05:46.718595769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:46.719486 containerd[1538]: time="2025-07-14T23:05:46.719447100Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 14 23:05:46.720094 containerd[1538]: time="2025-07-14T23:05:46.719782968Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:46.722377 containerd[1538]: time="2025-07-14T23:05:46.722352237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:46.723037 containerd[1538]: time="2025-07-14T23:05:46.722837912Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.579048418s" Jul 14 23:05:46.723037 containerd[1538]: time="2025-07-14T23:05:46.722862607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 14 23:05:46.723336 containerd[1538]: time="2025-07-14T23:05:46.723208504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 23:05:47.669474 containerd[1538]: time="2025-07-14T23:05:47.669436263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:47.669910 containerd[1538]: time="2025-07-14T23:05:47.669889557Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 14 23:05:47.670125 containerd[1538]: time="2025-07-14T23:05:47.670109831Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:47.671805 containerd[1538]: time="2025-07-14T23:05:47.671777839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:47.672492 containerd[1538]: time="2025-07-14T23:05:47.672395376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 949.169897ms" Jul 14 23:05:47.672492 
containerd[1538]: time="2025-07-14T23:05:47.672413919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 14 23:05:47.672798 containerd[1538]: time="2025-07-14T23:05:47.672783663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 23:05:49.219299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373430963.mount: Deactivated successfully. Jul 14 23:05:49.667552 containerd[1538]: time="2025-07-14T23:05:49.667326392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:49.673068 containerd[1538]: time="2025-07-14T23:05:49.673040086Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 14 23:05:49.681691 containerd[1538]: time="2025-07-14T23:05:49.681645984Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:49.685040 containerd[1538]: time="2025-07-14T23:05:49.685015773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:49.685502 containerd[1538]: time="2025-07-14T23:05:49.685292629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.01248798s" Jul 14 23:05:49.685502 containerd[1538]: time="2025-07-14T23:05:49.685310722Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 14 23:05:49.685614 containerd[1538]: time="2025-07-14T23:05:49.685600023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 23:05:50.390485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512252721.mount: Deactivated successfully. Jul 14 23:05:51.261425 containerd[1538]: time="2025-07-14T23:05:51.261371596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:51.263580 containerd[1538]: time="2025-07-14T23:05:51.263452141Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 23:05:51.264311 containerd[1538]: time="2025-07-14T23:05:51.264284042Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:51.267087 containerd[1538]: time="2025-07-14T23:05:51.266493206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:51.267211 containerd[1538]: time="2025-07-14T23:05:51.267140025Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.581523438s" Jul 14 23:05:51.267211 containerd[1538]: time="2025-07-14T23:05:51.267158973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image 
reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 23:05:51.267588 containerd[1538]: time="2025-07-14T23:05:51.267530712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 23:05:52.160864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123464116.mount: Deactivated successfully. Jul 14 23:05:52.371826 containerd[1538]: time="2025-07-14T23:05:52.371262083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:52.374199 containerd[1538]: time="2025-07-14T23:05:52.374171137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 23:05:52.383839 containerd[1538]: time="2025-07-14T23:05:52.383806888Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:52.391505 containerd[1538]: time="2025-07-14T23:05:52.391477644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:52.392364 containerd[1538]: time="2025-07-14T23:05:52.391953249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.1244058s" Jul 14 23:05:52.392364 containerd[1538]: time="2025-07-14T23:05:52.391976683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 23:05:52.392364 containerd[1538]: time="2025-07-14T23:05:52.392290497Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 23:05:52.822613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638436429.mount: Deactivated successfully. Jul 14 23:05:54.008162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 23:05:54.015202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:05:54.623652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:05:54.626253 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 23:05:54.749742 kubelet[2183]: E0714 23:05:54.749708 2183 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:05:54.750794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:05:54.750871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:05:55.456755 update_engine[1520]: I20250714 23:05:55.456693 1520 update_attempter.cc:509] Updating boot flags... 
Jul 14 23:05:55.485182 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2199) Jul 14 23:05:55.516093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2202) Jul 14 23:05:58.101981 containerd[1538]: time="2025-07-14T23:05:58.101943901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:58.102621 containerd[1538]: time="2025-07-14T23:05:58.102606551Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 14 23:05:58.102674 containerd[1538]: time="2025-07-14T23:05:58.102607873Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:58.104433 containerd[1538]: time="2025-07-14T23:05:58.104421301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:05:58.105531 containerd[1538]: time="2025-07-14T23:05:58.105070385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.712765008s" Jul 14 23:05:58.105531 containerd[1538]: time="2025-07-14T23:05:58.105101294Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 23:06:04.758474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
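etcd:3.5.15-0 is the largest and slowest pull in this boot: the log reports size 56909194 bytes fetched in 5.712765008 s, versus under two seconds for each control-plane image. A back-of-envelope rate in integer arithmetic (1 byte/ms equals 1 kB/s):

```shell
# Approximate etcd pull throughput from the figures the log reports.
bytes=56909194   # reported image size
ms=5712          # 5.712765008 s, truncated to milliseconds
echo "$(( bytes / ms )) kB/s"
```

Roughly 10 MB/s, consistent with the smaller images above completing in about a second each.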
Jul 14 23:06:04.765951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:06:04.916643 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 23:06:04.916697 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 23:06:04.916855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:06:04.922302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:06:04.939399 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-9.scope)... Jul 14 23:06:04.939413 systemd[1]: Reloading... Jul 14 23:06:04.994095 zram_generator::config[2285]: No configuration found. Jul 14 23:06:05.050795 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 23:06:05.066189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:06:05.110381 systemd[1]: Reloading finished in 170 ms. Jul 14 23:06:05.158007 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 23:06:05.158067 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 23:06:05.158268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 23:06:05.169270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 23:06:05.417036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
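The certificate-bootstrap error that follows is a plain TCP refusal: the freshly started kubelet POSTs its CSR before anything listens on the API-server endpoint. Pulling host:port out of the logged URL with POSIX parameter expansion:

```shell
# Extract the API-server endpoint from the URL in the kubelet error line.
url="https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests"
hostport=${url#https://}   # strip scheme
hostport=${hostport%%/*}   # drop the request path
echo "$hostport"
```

A `connect: connection refused` against that address simply means no process is bound to port 6443 yet; the kubelet's client-rotation loop retries until the control plane comes up.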
Jul 14 23:06:05.424339 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 23:06:05.456464 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:06:05.456464 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 23:06:05.456464 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:06:05.456707 kubelet[2349]: I0714 23:06:05.456502 2349 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:06:05.761709 kubelet[2349]: I0714 23:06:05.761689 2349 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 23:06:05.761709 kubelet[2349]: I0714 23:06:05.761706 2349 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:06:05.761849 kubelet[2349]: I0714 23:06:05.761839 2349 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 23:06:05.782692 kubelet[2349]: I0714 23:06:05.782589 2349 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:06:05.783928 kubelet[2349]: E0714 23:06:05.783901 2349 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:05.799626 kubelet[2349]: E0714 23:06:05.799581 2349 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:06:05.799720 kubelet[2349]: I0714 23:06:05.799681 2349 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:06:05.804948 kubelet[2349]: I0714 23:06:05.804918 2349 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 23:06:05.804984 kubelet[2349]: I0714 23:06:05.804981 2349 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 23:06:05.805110 kubelet[2349]: I0714 23:06:05.805069 2349 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:06:05.805209 kubelet[2349]: I0714 23:06:05.805111 2349 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:06:05.806274 kubelet[2349]: I0714 23:06:05.806259 2349 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 23:06:05.806274 kubelet[2349]: I0714 23:06:05.806272 2349 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 23:06:05.806345 kubelet[2349]: I0714 23:06:05.806332 2349 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:06:05.808291 kubelet[2349]: I0714 23:06:05.808277 2349 kubelet.go:408] "Attempting 
to sync node with API server" Jul 14 23:06:05.808329 kubelet[2349]: I0714 23:06:05.808311 2349 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:06:05.809477 kubelet[2349]: I0714 23:06:05.809324 2349 kubelet.go:314] "Adding apiserver pod source" Jul 14 23:06:05.809477 kubelet[2349]: I0714 23:06:05.809344 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:06:05.811708 kubelet[2349]: W0714 23:06:05.811449 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:05.811708 kubelet[2349]: E0714 23:06:05.811481 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:05.811708 kubelet[2349]: W0714 23:06:05.811664 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:05.811708 kubelet[2349]: E0714 23:06:05.811686 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:05.811799 kubelet[2349]: I0714 23:06:05.811718 2349 kuberuntime_manager.go:262] "Container runtime 
initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 23:06:05.814935 kubelet[2349]: I0714 23:06:05.814924 2349 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:06:05.814969 kubelet[2349]: W0714 23:06:05.814954 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 23:06:05.815269 kubelet[2349]: I0714 23:06:05.815257 2349 server.go:1274] "Started kubelet" Jul 14 23:06:05.815809 kubelet[2349]: I0714 23:06:05.815796 2349 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:06:05.818017 kubelet[2349]: I0714 23:06:05.817511 2349 server.go:449] "Adding debug handlers to kubelet server" Jul 14 23:06:05.819883 kubelet[2349]: I0714 23:06:05.819582 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:06:05.819883 kubelet[2349]: I0714 23:06:05.819708 2349 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:06:05.821744 kubelet[2349]: I0714 23:06:05.821559 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:06:05.822395 kubelet[2349]: E0714 23:06:05.819814 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185240b86b150e86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 23:06:05.81524647 +0000 UTC m=+0.388472602,LastTimestamp:2025-07-14 23:06:05.81524647 +0000 UTC 
m=+0.388472602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 23:06:05.822531 kubelet[2349]: I0714 23:06:05.822523 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:06:05.826502 kubelet[2349]: E0714 23:06:05.826489 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:05.826635 kubelet[2349]: I0714 23:06:05.826628 2349 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 23:06:05.827762 kubelet[2349]: I0714 23:06:05.827754 2349 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 23:06:05.833050 kubelet[2349]: I0714 23:06:05.832521 2349 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:06:05.833050 kubelet[2349]: W0714 23:06:05.832838 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:05.833050 kubelet[2349]: E0714 23:06:05.832865 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:05.833050 kubelet[2349]: E0714 23:06:05.832896 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" Jul 14 23:06:05.839006 
kubelet[2349]: I0714 23:06:05.838946 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 23:06:05.839608 kubelet[2349]: I0714 23:06:05.839600 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 23:06:05.839787 kubelet[2349]: I0714 23:06:05.839651 2349 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 23:06:05.839787 kubelet[2349]: I0714 23:06:05.839666 2349 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 23:06:05.839787 kubelet[2349]: E0714 23:06:05.839687 2349 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:06:05.840409 kubelet[2349]: I0714 23:06:05.840401 2349 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:06:05.840590 kubelet[2349]: I0714 23:06:05.840580 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:06:05.843389 kubelet[2349]: I0714 23:06:05.843380 2349 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:06:05.844363 kubelet[2349]: W0714 23:06:05.844342 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:05.845590 kubelet[2349]: E0714 23:06:05.845531 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 
23:06:05.853947 kubelet[2349]: E0714 23:06:05.853918 2349 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:06:05.872408 kubelet[2349]: I0714 23:06:05.872371 2349 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 23:06:05.872408 kubelet[2349]: I0714 23:06:05.872380 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 23:06:05.872408 kubelet[2349]: I0714 23:06:05.872388 2349 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:06:05.885453 kubelet[2349]: I0714 23:06:05.885439 2349 policy_none.go:49] "None policy: Start" Jul 14 23:06:05.885709 kubelet[2349]: I0714 23:06:05.885696 2349 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 23:06:05.885709 kubelet[2349]: I0714 23:06:05.885708 2349 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:06:05.898172 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 23:06:05.909242 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 23:06:05.911847 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 14 23:06:05.922872 kubelet[2349]: I0714 23:06:05.922421 2349 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:06:05.922872 kubelet[2349]: I0714 23:06:05.922559 2349 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:06:05.922872 kubelet[2349]: I0714 23:06:05.922571 2349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:06:05.922872 kubelet[2349]: I0714 23:06:05.922795 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:06:05.923856 kubelet[2349]: E0714 23:06:05.923811 2349 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 23:06:05.947931 systemd[1]: Created slice kubepods-burstable-pod39d5c0fad02818b465d7fda7bcd4b156.slice - libcontainer container kubepods-burstable-pod39d5c0fad02818b465d7fda7bcd4b156.slice. Jul 14 23:06:05.966791 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 14 23:06:05.970729 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 14 23:06:06.023866 kubelet[2349]: I0714 23:06:06.023807 2349 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:06:06.024276 kubelet[2349]: E0714 23:06:06.024253 2349 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Jul 14 23:06:06.033628 kubelet[2349]: I0714 23:06:06.033606 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:06.033683 kubelet[2349]: I0714 23:06:06.033637 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:06.033683 kubelet[2349]: I0714 23:06:06.033655 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:06.033683 kubelet[2349]: I0714 23:06:06.033669 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:06.033753 kubelet[2349]: I0714 23:06:06.033682 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:06:06.033753 kubelet[2349]: I0714 23:06:06.033699 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:06.033753 kubelet[2349]: I0714 23:06:06.033710 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:06.033753 kubelet[2349]: I0714 23:06:06.033731 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:06.033753 kubelet[2349]: I0714 23:06:06.033743 2349 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:06.033952 kubelet[2349]: E0714 23:06:06.033929 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" Jul 14 23:06:06.225346 kubelet[2349]: I0714 23:06:06.225301 2349 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:06:06.225523 kubelet[2349]: E0714 23:06:06.225504 2349 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Jul 14 23:06:06.266857 containerd[1538]: time="2025-07-14T23:06:06.266816140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39d5c0fad02818b465d7fda7bcd4b156,Namespace:kube-system,Attempt:0,}" Jul 14 23:06:06.273273 containerd[1538]: time="2025-07-14T23:06:06.273125192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 23:06:06.273273 containerd[1538]: time="2025-07-14T23:06:06.273159922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 23:06:06.435133 kubelet[2349]: E0714 23:06:06.435050 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" Jul 14 23:06:06.626716 kubelet[2349]: I0714 23:06:06.626681 2349 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:06:06.626970 
kubelet[2349]: E0714 23:06:06.626885 2349 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" Jul 14 23:06:06.746557 kubelet[2349]: W0714 23:06:06.746475 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:06.746557 kubelet[2349]: E0714 23:06:06.746524 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:06.757541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864657537.mount: Deactivated successfully. 
Jul 14 23:06:06.760090 containerd[1538]: time="2025-07-14T23:06:06.760060772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:06:06.760918 containerd[1538]: time="2025-07-14T23:06:06.760825704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 23:06:06.761479 containerd[1538]: time="2025-07-14T23:06:06.761423707Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:06:06.761972 containerd[1538]: time="2025-07-14T23:06:06.761953914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:06:06.762646 containerd[1538]: time="2025-07-14T23:06:06.762592002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 23:06:06.763090 containerd[1538]: time="2025-07-14T23:06:06.763064387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:06:06.763829 containerd[1538]: time="2025-07-14T23:06:06.763708737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.807016ms" Jul 14 23:06:06.765821 containerd[1538]: time="2025-07-14T23:06:06.765319804Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:06:06.765821 containerd[1538]: time="2025-07-14T23:06:06.765771469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.575789ms" Jul 14 23:06:06.766949 containerd[1538]: time="2025-07-14T23:06:06.766652173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 23:06:06.768334 containerd[1538]: time="2025-07-14T23:06:06.768261987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.094221ms" Jul 14 23:06:06.850786 kubelet[2349]: W0714 23:06:06.850713 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused Jul 14 23:06:06.850786 kubelet[2349]: E0714 23:06:06.850758 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:06:06.866092 containerd[1538]: 
time="2025-07-14T23:06:06.865807496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:06.866092 containerd[1538]: time="2025-07-14T23:06:06.865856801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:06.866092 containerd[1538]: time="2025-07-14T23:06:06.865874740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.866092 containerd[1538]: time="2025-07-14T23:06:06.865930153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.867434 containerd[1538]: time="2025-07-14T23:06:06.867379535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:06.867434 containerd[1538]: time="2025-07-14T23:06:06.867418572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:06.867484 containerd[1538]: time="2025-07-14T23:06:06.867451703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.868020 containerd[1538]: time="2025-07-14T23:06:06.867938784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.870297 containerd[1538]: time="2025-07-14T23:06:06.870063828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:06.870297 containerd[1538]: time="2025-07-14T23:06:06.870110580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:06.870297 containerd[1538]: time="2025-07-14T23:06:06.870120403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.870297 containerd[1538]: time="2025-07-14T23:06:06.870263030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:06.885363 systemd[1]: Started cri-containerd-dca8e2e97aa422370a6ffc7b6872866ca9a76b0d66135b336858a70a53fc1d95.scope - libcontainer container dca8e2e97aa422370a6ffc7b6872866ca9a76b0d66135b336858a70a53fc1d95. Jul 14 23:06:06.889565 systemd[1]: Started cri-containerd-058f25addf8f86bcce22b6e2196d833a0fe9572b544fd165fbe1e11d1104a23b.scope - libcontainer container 058f25addf8f86bcce22b6e2196d833a0fe9572b544fd165fbe1e11d1104a23b. Jul 14 23:06:06.892738 systemd[1]: Started cri-containerd-44a1d43c6ca9d4a1c165453fab0b48d40b577c1c79b725bce4831cf920d22346.scope - libcontainer container 44a1d43c6ca9d4a1c165453fab0b48d40b577c1c79b725bce4831cf920d22346. 
Jul 14 23:06:06.931564 containerd[1538]: time="2025-07-14T23:06:06.931510363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"44a1d43c6ca9d4a1c165453fab0b48d40b577c1c79b725bce4831cf920d22346\"" Jul 14 23:06:06.932806 containerd[1538]: time="2025-07-14T23:06:06.932538972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"058f25addf8f86bcce22b6e2196d833a0fe9572b544fd165fbe1e11d1104a23b\"" Jul 14 23:06:06.936571 containerd[1538]: time="2025-07-14T23:06:06.936549704Z" level=info msg="CreateContainer within sandbox \"44a1d43c6ca9d4a1c165453fab0b48d40b577c1c79b725bce4831cf920d22346\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 23:06:06.936732 containerd[1538]: time="2025-07-14T23:06:06.936658373Z" level=info msg="CreateContainer within sandbox \"058f25addf8f86bcce22b6e2196d833a0fe9572b544fd165fbe1e11d1104a23b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 23:06:06.940776 containerd[1538]: time="2025-07-14T23:06:06.940753211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39d5c0fad02818b465d7fda7bcd4b156,Namespace:kube-system,Attempt:0,} returns sandbox id \"dca8e2e97aa422370a6ffc7b6872866ca9a76b0d66135b336858a70a53fc1d95\"" Jul 14 23:06:06.943978 containerd[1538]: time="2025-07-14T23:06:06.943624662Z" level=info msg="CreateContainer within sandbox \"dca8e2e97aa422370a6ffc7b6872866ca9a76b0d66135b336858a70a53fc1d95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 23:06:06.948245 containerd[1538]: time="2025-07-14T23:06:06.948225198Z" level=info msg="CreateContainer within sandbox \"058f25addf8f86bcce22b6e2196d833a0fe9572b544fd165fbe1e11d1104a23b\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"80808158cfe8d10cace816d832f7bf16ee7b539b14af9fc88cef17aa6ae1839e\"" Jul 14 23:06:06.948710 containerd[1538]: time="2025-07-14T23:06:06.948697456Z" level=info msg="StartContainer for \"80808158cfe8d10cace816d832f7bf16ee7b539b14af9fc88cef17aa6ae1839e\"" Jul 14 23:06:06.949739 containerd[1538]: time="2025-07-14T23:06:06.949726216Z" level=info msg="CreateContainer within sandbox \"dca8e2e97aa422370a6ffc7b6872866ca9a76b0d66135b336858a70a53fc1d95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23895b3bef9ed2a3e55e8eb7b347135821dcf47aec8fa419b1123872d758332e\"" Jul 14 23:06:06.950189 containerd[1538]: time="2025-07-14T23:06:06.950041179Z" level=info msg="StartContainer for \"23895b3bef9ed2a3e55e8eb7b347135821dcf47aec8fa419b1123872d758332e\"" Jul 14 23:06:06.951092 containerd[1538]: time="2025-07-14T23:06:06.951065306Z" level=info msg="CreateContainer within sandbox \"44a1d43c6ca9d4a1c165453fab0b48d40b577c1c79b725bce4831cf920d22346\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8dd6e1e148549067a723914ff6ac0e1b25e0098d60dc907435517bf7642b2a9c\"" Jul 14 23:06:06.951422 containerd[1538]: time="2025-07-14T23:06:06.951406098Z" level=info msg="StartContainer for \"8dd6e1e148549067a723914ff6ac0e1b25e0098d60dc907435517bf7642b2a9c\"" Jul 14 23:06:06.972218 systemd[1]: Started cri-containerd-23895b3bef9ed2a3e55e8eb7b347135821dcf47aec8fa419b1123872d758332e.scope - libcontainer container 23895b3bef9ed2a3e55e8eb7b347135821dcf47aec8fa419b1123872d758332e. Jul 14 23:06:06.975033 systemd[1]: Started cri-containerd-8dd6e1e148549067a723914ff6ac0e1b25e0098d60dc907435517bf7642b2a9c.scope - libcontainer container 8dd6e1e148549067a723914ff6ac0e1b25e0098d60dc907435517bf7642b2a9c. 
Jul 14 23:06:06.977216 systemd[1]: Started cri-containerd-80808158cfe8d10cace816d832f7bf16ee7b539b14af9fc88cef17aa6ae1839e.scope - libcontainer container 80808158cfe8d10cace816d832f7bf16ee7b539b14af9fc88cef17aa6ae1839e.
Jul 14 23:06:07.039154 containerd[1538]: time="2025-07-14T23:06:07.039130324Z" level=info msg="StartContainer for \"23895b3bef9ed2a3e55e8eb7b347135821dcf47aec8fa419b1123872d758332e\" returns successfully"
Jul 14 23:06:07.039352 containerd[1538]: time="2025-07-14T23:06:07.039207992Z" level=info msg="StartContainer for \"8dd6e1e148549067a723914ff6ac0e1b25e0098d60dc907435517bf7642b2a9c\" returns successfully"
Jul 14 23:06:07.039381 containerd[1538]: time="2025-07-14T23:06:07.039209980Z" level=info msg="StartContainer for \"80808158cfe8d10cace816d832f7bf16ee7b539b14af9fc88cef17aa6ae1839e\" returns successfully"
Jul 14 23:06:07.085124 kubelet[2349]: W0714 23:06:07.085087 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused
Jul 14 23:06:07.085232 kubelet[2349]: E0714 23:06:07.085129 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError"
Jul 14 23:06:07.236179 kubelet[2349]: E0714 23:06:07.236141 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s"
Jul 14 23:06:07.386321 kubelet[2349]: W0714 23:06:07.386243 2349 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused
Jul 14 23:06:07.386321 kubelet[2349]: E0714 23:06:07.386285 2349 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError"
Jul 14 23:06:07.435976 kubelet[2349]: I0714 23:06:07.435777 2349 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 23:06:07.435976 kubelet[2349]: E0714 23:06:07.435958 2349 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost"
Jul 14 23:06:08.712449 kubelet[2349]: E0714 23:06:08.712413 2349 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Jul 14 23:06:08.838475 kubelet[2349]: E0714 23:06:08.838450 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 14 23:06:09.037821 kubelet[2349]: I0714 23:06:09.037799 2349 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 23:06:09.048607 kubelet[2349]: I0714 23:06:09.048583 2349 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 14 23:06:09.048607 kubelet[2349]: E0714 23:06:09.048607 2349 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 14 23:06:09.055600 kubelet[2349]:
E0714 23:06:09.055578 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.156154 kubelet[2349]: E0714 23:06:09.156129 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.256989 kubelet[2349]: E0714 23:06:09.256963 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.357767 kubelet[2349]: E0714 23:06:09.357696 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.457959 kubelet[2349]: E0714 23:06:09.457936 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.558883 kubelet[2349]: E0714 23:06:09.558855 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.659241 kubelet[2349]: E0714 23:06:09.659163 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.759849 kubelet[2349]: E0714 23:06:09.759818 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.860355 kubelet[2349]: E0714 23:06:09.860324 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:09.961320 kubelet[2349]: E0714 23:06:09.961232 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:06:10.055386 systemd[1]: Reloading requested from client PID 2620 ('systemctl') (unit session-9.scope)... Jul 14 23:06:10.055589 systemd[1]: Reloading... 
Jul 14 23:06:10.061837 kubelet[2349]: E0714 23:06:10.061818 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 23:06:10.118116 zram_generator::config[2661]: No configuration found.
Jul 14 23:06:10.162300 kubelet[2349]: E0714 23:06:10.162278 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 23:06:10.194899 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 14 23:06:10.209793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 23:06:10.261361 systemd[1]: Reloading finished in 205 ms.
Jul 14 23:06:10.262623 kubelet[2349]: E0714 23:06:10.262605 2349 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 23:06:10.286996 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 23:06:10.299804 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 23:06:10.299949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 23:06:10.304467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 23:06:11.036002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 23:06:11.045308 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 23:06:11.108816 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 23:06:11.108816 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 14 23:06:11.108816 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 23:06:11.109067 kubelet[2725]: I0714 23:06:11.108854 2725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 23:06:11.116493 kubelet[2725]: I0714 23:06:11.115972 2725 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 23:06:11.116493 kubelet[2725]: I0714 23:06:11.115986 2725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 23:06:11.116493 kubelet[2725]: I0714 23:06:11.116128 2725 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 23:06:11.118146 kubelet[2725]: I0714 23:06:11.118136 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 14 23:06:11.128352 kubelet[2725]: I0714 23:06:11.128273 2725 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 23:06:11.133287 kubelet[2725]: E0714 23:06:11.133122 2725 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 23:06:11.133287 kubelet[2725]: I0714 23:06:11.133147 2725 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled.
Falling back to using cgroupDriver from kubelet config."
Jul 14 23:06:11.135818 kubelet[2725]: I0714 23:06:11.135809 2725 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 23:06:11.135913 kubelet[2725]: I0714 23:06:11.135907 2725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 23:06:11.136010 kubelet[2725]: I0714 23:06:11.135994 2725 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 23:06:11.136146 kubelet[2725]: I0714 23:06:11.136041 2725 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136242 2725 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136250 2725 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136267 2725 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136324 2725 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136332 2725 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136348 2725 kubelet.go:314] "Adding apiserver pod source"
Jul 14 23:06:11.136622 kubelet[2725]: I0714 23:06:11.136353 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 23:06:11.138008 kubelet[2725]: I0714 23:06:11.137971 2725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 23:06:11.145543 kubelet[2725]: I0714 23:06:11.145334 2725 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 23:06:11.148090 kubelet[2725]: I0714 23:06:11.148065 2725 server.go:1274] "Started kubelet"
Jul 14 23:06:11.151416 kubelet[2725]: I0714 23:06:11.151401 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 23:06:11.154858 kubelet[2725]: E0714 23:06:11.154770 2725 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 23:06:11.156619 kubelet[2725]: I0714 23:06:11.156599 2725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 23:06:11.157353 kubelet[2725]: I0714 23:06:11.157345 2725 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 23:06:11.159028 kubelet[2725]: I0714 23:06:11.158426 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 23:06:11.159028 kubelet[2725]: I0714 23:06:11.158532 2725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 23:06:11.159028 kubelet[2725]: I0714 23:06:11.158646 2725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 23:06:11.159028 kubelet[2725]: I0714 23:06:11.158914 2725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 14 23:06:11.159887 kubelet[2725]: I0714 23:06:11.159879 2725 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 14 23:06:11.159991 kubelet[2725]: I0714 23:06:11.159985 2725 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 23:06:11.161972 kubelet[2725]: I0714 23:06:11.161933 2725 factory.go:221] Registration of the systemd container factory successfully
Jul 14 23:06:11.162204 kubelet[2725]: I0714 23:06:11.162150 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 23:06:11.162583 kubelet[2725]: I0714 23:06:11.162566 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 23:06:11.163391 kubelet[2725]: I0714 23:06:11.163141 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 23:06:11.163391 kubelet[2725]: I0714 23:06:11.163153 2725 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 14 23:06:11.163391 kubelet[2725]: I0714 23:06:11.163162 2725 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 14 23:06:11.163391 kubelet[2725]: E0714 23:06:11.163183 2725 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 23:06:11.167141 kubelet[2725]: I0714 23:06:11.166431 2725 factory.go:221] Registration of the containerd container factory successfully
Jul 14 23:06:11.198047 kubelet[2725]: I0714 23:06:11.198024 2725 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 14 23:06:11.198249 kubelet[2725]: I0714 23:06:11.198243 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 14 23:06:11.198310 kubelet[2725]: I0714 23:06:11.198305 2725 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 23:06:11.198449 kubelet[2725]: I0714 23:06:11.198441 2725 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 14 23:06:11.198501 kubelet[2725]: I0714 23:06:11.198480 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 14 23:06:11.198532 kubelet[2725]: I0714 23:06:11.198529 2725 policy_none.go:49] "None policy: Start"
Jul 14 23:06:11.198998 kubelet[2725]: I0714 23:06:11.198991 2725 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 14 23:06:11.199059 kubelet[2725]: I0714 23:06:11.199053 2725 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 23:06:11.199233 kubelet[2725]: I0714 23:06:11.199226 2725 state_mem.go:75] "Updated machine memory state"
Jul 14 23:06:11.205618 kubelet[2725]: I0714 23:06:11.205573 2725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 14 23:06:11.206093 kubelet[2725]: I0714 23:06:11.205797 2725 eviction_manager.go:189]
"Eviction manager: starting control loop" Jul 14 23:06:11.206093 kubelet[2725]: I0714 23:06:11.205808 2725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:06:11.206707 kubelet[2725]: I0714 23:06:11.206577 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:06:11.311395 kubelet[2725]: I0714 23:06:11.311331 2725 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 23:06:11.315915 kubelet[2725]: I0714 23:06:11.315820 2725 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 23:06:11.316561 kubelet[2725]: I0714 23:06:11.316125 2725 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 23:06:11.461186 kubelet[2725]: I0714 23:06:11.461113 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:11.461186 kubelet[2725]: I0714 23:06:11.461170 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:11.461186 kubelet[2725]: I0714 23:06:11.461185 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:11.461358 kubelet[2725]: I0714 23:06:11.461197 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:06:11.461358 kubelet[2725]: I0714 23:06:11.461207 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:11.461358 kubelet[2725]: I0714 23:06:11.461215 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39d5c0fad02818b465d7fda7bcd4b156-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39d5c0fad02818b465d7fda7bcd4b156\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:11.461358 kubelet[2725]: I0714 23:06:11.461224 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:11.461358 kubelet[2725]: I0714 23:06:11.461232 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 
23:06:11.461448 kubelet[2725]: I0714 23:06:11.461240 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:06:12.136923 kubelet[2725]: I0714 23:06:12.136891 2725 apiserver.go:52] "Watching apiserver" Jul 14 23:06:12.160773 kubelet[2725]: I0714 23:06:12.160745 2725 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 23:06:12.187712 kubelet[2725]: E0714 23:06:12.187582 2725 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 23:06:12.199335 kubelet[2725]: I0714 23:06:12.198298 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.198285735 podStartE2EDuration="1.198285735s" podCreationTimestamp="2025-07-14 23:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:12.192676799 +0000 UTC m=+1.113383093" watchObservedRunningTime="2025-07-14 23:06:12.198285735 +0000 UTC m=+1.118992027" Jul 14 23:06:12.204634 kubelet[2725]: I0714 23:06:12.204599 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.204562854 podStartE2EDuration="1.204562854s" podCreationTimestamp="2025-07-14 23:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:12.198623737 +0000 UTC m=+1.119330028" watchObservedRunningTime="2025-07-14 23:06:12.204562854 +0000 UTC m=+1.125269144" Jul 14 
23:06:12.204757 kubelet[2725]: I0714 23:06:12.204678 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.20467305 podStartE2EDuration="1.20467305s" podCreationTimestamp="2025-07-14 23:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:12.204034261 +0000 UTC m=+1.124740560" watchObservedRunningTime="2025-07-14 23:06:12.20467305 +0000 UTC m=+1.125379344" Jul 14 23:06:16.305816 kubelet[2725]: I0714 23:06:16.305793 2725 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 23:06:16.306278 containerd[1538]: time="2025-07-14T23:06:16.306221402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 23:06:16.306444 kubelet[2725]: I0714 23:06:16.306343 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 23:06:17.312939 systemd[1]: Created slice kubepods-besteffort-pod329c98ae_b27c_4eb0_878f_a09b022b2273.slice - libcontainer container kubepods-besteffort-pod329c98ae_b27c_4eb0_878f_a09b022b2273.slice. 
Jul 14 23:06:17.397735 kubelet[2725]: I0714 23:06:17.397510 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdkr\" (UniqueName: \"kubernetes.io/projected/329c98ae-b27c-4eb0-878f-a09b022b2273-kube-api-access-spdkr\") pod \"kube-proxy-9sg27\" (UID: \"329c98ae-b27c-4eb0-878f-a09b022b2273\") " pod="kube-system/kube-proxy-9sg27" Jul 14 23:06:17.397735 kubelet[2725]: I0714 23:06:17.397553 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/329c98ae-b27c-4eb0-878f-a09b022b2273-kube-proxy\") pod \"kube-proxy-9sg27\" (UID: \"329c98ae-b27c-4eb0-878f-a09b022b2273\") " pod="kube-system/kube-proxy-9sg27" Jul 14 23:06:17.397735 kubelet[2725]: I0714 23:06:17.397570 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/329c98ae-b27c-4eb0-878f-a09b022b2273-xtables-lock\") pod \"kube-proxy-9sg27\" (UID: \"329c98ae-b27c-4eb0-878f-a09b022b2273\") " pod="kube-system/kube-proxy-9sg27" Jul 14 23:06:17.397735 kubelet[2725]: I0714 23:06:17.397582 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/329c98ae-b27c-4eb0-878f-a09b022b2273-lib-modules\") pod \"kube-proxy-9sg27\" (UID: \"329c98ae-b27c-4eb0-878f-a09b022b2273\") " pod="kube-system/kube-proxy-9sg27" Jul 14 23:06:17.425112 systemd[1]: Created slice kubepods-besteffort-pod169fc4d1_35fc_46da_964f_e8ea9a8ba30b.slice - libcontainer container kubepods-besteffort-pod169fc4d1_35fc_46da_964f_e8ea9a8ba30b.slice. 
Jul 14 23:06:17.498386 kubelet[2725]: I0714 23:06:17.498338 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2cz\" (UniqueName: \"kubernetes.io/projected/169fc4d1-35fc-46da-964f-e8ea9a8ba30b-kube-api-access-lc2cz\") pod \"tigera-operator-5bf8dfcb4-j6wtd\" (UID: \"169fc4d1-35fc-46da-964f-e8ea9a8ba30b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-j6wtd"
Jul 14 23:06:17.498576 kubelet[2725]: I0714 23:06:17.498421 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/169fc4d1-35fc-46da-964f-e8ea9a8ba30b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-j6wtd\" (UID: \"169fc4d1-35fc-46da-964f-e8ea9a8ba30b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-j6wtd"
Jul 14 23:06:17.620235 containerd[1538]: time="2025-07-14T23:06:17.620171059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9sg27,Uid:329c98ae-b27c-4eb0-878f-a09b022b2273,Namespace:kube-system,Attempt:0,}"
Jul 14 23:06:17.638711 containerd[1538]: time="2025-07-14T23:06:17.638361087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 23:06:17.638711 containerd[1538]: time="2025-07-14T23:06:17.638413984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 23:06:17.638711 containerd[1538]: time="2025-07-14T23:06:17.638447665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:06:17.638711 containerd[1538]: time="2025-07-14T23:06:17.638531326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:06:17.654194 systemd[1]: Started cri-containerd-b0c0883d9aaa7d970709758d880b707da2c1447fc2217f0806de51ec07d369f2.scope - libcontainer container b0c0883d9aaa7d970709758d880b707da2c1447fc2217f0806de51ec07d369f2.
Jul 14 23:06:17.666217 containerd[1538]: time="2025-07-14T23:06:17.666156313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9sg27,Uid:329c98ae-b27c-4eb0-878f-a09b022b2273,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0c0883d9aaa7d970709758d880b707da2c1447fc2217f0806de51ec07d369f2\""
Jul 14 23:06:17.668288 containerd[1538]: time="2025-07-14T23:06:17.668227411Z" level=info msg="CreateContainer within sandbox \"b0c0883d9aaa7d970709758d880b707da2c1447fc2217f0806de51ec07d369f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 23:06:17.674315 containerd[1538]: time="2025-07-14T23:06:17.674300278Z" level=info msg="CreateContainer within sandbox \"b0c0883d9aaa7d970709758d880b707da2c1447fc2217f0806de51ec07d369f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ba90d609135739e4f8968adb04ac3ba110db481aaa4cf7dad679db4d838fbc3\""
Jul 14 23:06:17.674853 containerd[1538]: time="2025-07-14T23:06:17.674780643Z" level=info msg="StartContainer for \"4ba90d609135739e4f8968adb04ac3ba110db481aaa4cf7dad679db4d838fbc3\""
Jul 14 23:06:17.693217 systemd[1]: Started cri-containerd-4ba90d609135739e4f8968adb04ac3ba110db481aaa4cf7dad679db4d838fbc3.scope - libcontainer container 4ba90d609135739e4f8968adb04ac3ba110db481aaa4cf7dad679db4d838fbc3.
Jul 14 23:06:17.708719 containerd[1538]: time="2025-07-14T23:06:17.708669334Z" level=info msg="StartContainer for \"4ba90d609135739e4f8968adb04ac3ba110db481aaa4cf7dad679db4d838fbc3\" returns successfully" Jul 14 23:06:17.729273 containerd[1538]: time="2025-07-14T23:06:17.729194068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-j6wtd,Uid:169fc4d1-35fc-46da-964f-e8ea9a8ba30b,Namespace:tigera-operator,Attempt:0,}" Jul 14 23:06:17.741656 containerd[1538]: time="2025-07-14T23:06:17.741585595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:17.741656 containerd[1538]: time="2025-07-14T23:06:17.741613352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:17.741935 containerd[1538]: time="2025-07-14T23:06:17.741636333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:17.741935 containerd[1538]: time="2025-07-14T23:06:17.741681046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:17.754172 systemd[1]: Started cri-containerd-06ca673d66a95ec390cede00d2001a582aedbd41356878c232107aa130f2d1bf.scope - libcontainer container 06ca673d66a95ec390cede00d2001a582aedbd41356878c232107aa130f2d1bf. 
Jul 14 23:06:17.779627 containerd[1538]: time="2025-07-14T23:06:17.779594900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-j6wtd,Uid:169fc4d1-35fc-46da-964f-e8ea9a8ba30b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"06ca673d66a95ec390cede00d2001a582aedbd41356878c232107aa130f2d1bf\"" Jul 14 23:06:17.781092 containerd[1538]: time="2025-07-14T23:06:17.781023878Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 23:06:18.200096 kubelet[2725]: I0714 23:06:18.200035 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9sg27" podStartSLOduration=1.200022334 podStartE2EDuration="1.200022334s" podCreationTimestamp="2025-07-14 23:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:18.19995482 +0000 UTC m=+7.120661127" watchObservedRunningTime="2025-07-14 23:06:18.200022334 +0000 UTC m=+7.120728632" Jul 14 23:06:19.312572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947198279.mount: Deactivated successfully. 
Jul 14 23:06:20.042044 containerd[1538]: time="2025-07-14T23:06:20.041864961Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:20.042044 containerd[1538]: time="2025-07-14T23:06:20.042013262Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 14 23:06:20.042894 containerd[1538]: time="2025-07-14T23:06:20.042606025Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:20.043765 containerd[1538]: time="2025-07-14T23:06:20.043742290Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:20.044463 containerd[1538]: time="2025-07-14T23:06:20.044178280Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.263136728s" Jul 14 23:06:20.044463 containerd[1538]: time="2025-07-14T23:06:20.044199034Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 23:06:20.045847 containerd[1538]: time="2025-07-14T23:06:20.045830040Z" level=info msg="CreateContainer within sandbox \"06ca673d66a95ec390cede00d2001a582aedbd41356878c232107aa130f2d1bf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 23:06:20.051946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount742113336.mount: Deactivated successfully. 
Jul 14 23:06:20.064336 containerd[1538]: time="2025-07-14T23:06:20.064312454Z" level=info msg="CreateContainer within sandbox \"06ca673d66a95ec390cede00d2001a582aedbd41356878c232107aa130f2d1bf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c9049d7e85baa49ca1d2f30afb4b33b24d52ce54d3c92c29765e0a711b2b9f75\"" Jul 14 23:06:20.064762 containerd[1538]: time="2025-07-14T23:06:20.064725583Z" level=info msg="StartContainer for \"c9049d7e85baa49ca1d2f30afb4b33b24d52ce54d3c92c29765e0a711b2b9f75\"" Jul 14 23:06:20.089158 systemd[1]: Started cri-containerd-c9049d7e85baa49ca1d2f30afb4b33b24d52ce54d3c92c29765e0a711b2b9f75.scope - libcontainer container c9049d7e85baa49ca1d2f30afb4b33b24d52ce54d3c92c29765e0a711b2b9f75. Jul 14 23:06:20.103485 containerd[1538]: time="2025-07-14T23:06:20.103460847Z" level=info msg="StartContainer for \"c9049d7e85baa49ca1d2f30afb4b33b24d52ce54d3c92c29765e0a711b2b9f75\" returns successfully" Jul 14 23:06:20.202035 kubelet[2725]: I0714 23:06:20.201992 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-j6wtd" podStartSLOduration=0.937939382 podStartE2EDuration="3.201979174s" podCreationTimestamp="2025-07-14 23:06:17 +0000 UTC" firstStartedPulling="2025-07-14 23:06:17.780553999 +0000 UTC m=+6.701260288" lastFinishedPulling="2025-07-14 23:06:20.044593791 +0000 UTC m=+8.965300080" observedRunningTime="2025-07-14 23:06:20.201825163 +0000 UTC m=+9.122531463" watchObservedRunningTime="2025-07-14 23:06:20.201979174 +0000 UTC m=+9.122685474" Jul 14 23:06:25.216432 sudo[1832]: pam_unix(sudo:session): session closed for user root Jul 14 23:06:25.218297 sshd[1829]: pam_unix(sshd:session): session closed for user core Jul 14 23:06:25.220912 systemd[1]: sshd@6-139.178.70.101:22-139.178.68.195:37618.service: Deactivated successfully. Jul 14 23:06:25.224379 systemd[1]: session-9.scope: Deactivated successfully. 
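In the tigera-operator startup-latency record above, podStartSLOduration excludes the image-pull window: it is the E2E duration minus (lastFinishedPulling − firstStartedPulling). The figures in the record check out exactly (fractional seconds within minute 23:06 copied from the log):

```python
e2e = 3.201979174          # podStartE2EDuration
first_pull = 17.780553999  # firstStartedPulling, 23:06:17.780553999
last_pull = 20.044593791   # lastFinishedPulling, 23:06:20.044593791

slo = e2e - (last_pull - first_pull)  # SLO duration = E2E minus pull window
print(f"{slo:.9f}")  # 0.937939382 -- matches podStartSLOduration
```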
Jul 14 23:06:25.224902 systemd[1]: session-9.scope: Consumed 2.633s CPU time, 142.1M memory peak, 0B memory swap peak. Jul 14 23:06:25.225963 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Jul 14 23:06:25.227719 systemd-logind[1517]: Removed session 9. Jul 14 23:06:27.614449 systemd[1]: Created slice kubepods-besteffort-pod4060e505_f08d_4fa3_b6a5_68b110309ab3.slice - libcontainer container kubepods-besteffort-pod4060e505_f08d_4fa3_b6a5_68b110309ab3.slice. Jul 14 23:06:27.768519 kubelet[2725]: I0714 23:06:27.768490 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4060e505-f08d-4fa3-b6a5-68b110309ab3-typha-certs\") pod \"calico-typha-74587cd959-c8kjz\" (UID: \"4060e505-f08d-4fa3-b6a5-68b110309ab3\") " pod="calico-system/calico-typha-74587cd959-c8kjz" Jul 14 23:06:27.768824 kubelet[2725]: I0714 23:06:27.768522 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dm2\" (UniqueName: \"kubernetes.io/projected/4060e505-f08d-4fa3-b6a5-68b110309ab3-kube-api-access-s9dm2\") pod \"calico-typha-74587cd959-c8kjz\" (UID: \"4060e505-f08d-4fa3-b6a5-68b110309ab3\") " pod="calico-system/calico-typha-74587cd959-c8kjz" Jul 14 23:06:27.768824 kubelet[2725]: I0714 23:06:27.768545 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4060e505-f08d-4fa3-b6a5-68b110309ab3-tigera-ca-bundle\") pod \"calico-typha-74587cd959-c8kjz\" (UID: \"4060e505-f08d-4fa3-b6a5-68b110309ab3\") " pod="calico-system/calico-typha-74587cd959-c8kjz" Jul 14 23:06:27.926669 containerd[1538]: time="2025-07-14T23:06:27.926477422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74587cd959-c8kjz,Uid:4060e505-f08d-4fa3-b6a5-68b110309ab3,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:27.958117 
systemd[1]: Created slice kubepods-besteffort-pod05d09353_cdba_4623_abc4_75b496012fe5.slice - libcontainer container kubepods-besteffort-pod05d09353_cdba_4623_abc4_75b496012fe5.slice. Jul 14 23:06:27.964764 containerd[1538]: time="2025-07-14T23:06:27.964585090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:27.964764 containerd[1538]: time="2025-07-14T23:06:27.964626390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:27.964764 containerd[1538]: time="2025-07-14T23:06:27.964639802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:27.965932 containerd[1538]: time="2025-07-14T23:06:27.965797779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:27.972897 kubelet[2725]: I0714 23:06:27.972841 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-flexvol-driver-host\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.973179 kubelet[2725]: I0714 23:06:27.973125 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d09353-cdba-4623-abc4-75b496012fe5-tigera-ca-bundle\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.973532 kubelet[2725]: I0714 23:06:27.973252 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-var-lib-calico\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.973532 kubelet[2725]: I0714 23:06:27.973270 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-var-run-calico\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.973532 kubelet[2725]: I0714 23:06:27.973282 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wxk\" (UniqueName: \"kubernetes.io/projected/05d09353-cdba-4623-abc4-75b496012fe5-kube-api-access-d6wxk\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974275 kubelet[2725]: I0714 23:06:27.973781 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-cni-bin-dir\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974275 kubelet[2725]: I0714 23:06:27.973907 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-cni-net-dir\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974275 kubelet[2725]: I0714 23:06:27.973923 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-policysync\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974275 kubelet[2725]: I0714 23:06:27.973933 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05d09353-cdba-4623-abc4-75b496012fe5-node-certs\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974275 kubelet[2725]: I0714 23:06:27.973943 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-cni-log-dir\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974480 kubelet[2725]: I0714 23:06:27.973952 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-xtables-lock\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:27.974480 kubelet[2725]: I0714 23:06:27.974069 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05d09353-cdba-4623-abc4-75b496012fe5-lib-modules\") pod \"calico-node-mg8v2\" (UID: \"05d09353-cdba-4623-abc4-75b496012fe5\") " pod="calico-system/calico-node-mg8v2" Jul 14 23:06:28.003207 systemd[1]: Started cri-containerd-e345db21a8406e6eb9f4e55bd6f3cf8efed4b20820b36f77e8a0be7439206551.scope - libcontainer container e345db21a8406e6eb9f4e55bd6f3cf8efed4b20820b36f77e8a0be7439206551. 
Jul 14 23:06:28.040534 containerd[1538]: time="2025-07-14T23:06:28.040506239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74587cd959-c8kjz,Uid:4060e505-f08d-4fa3-b6a5-68b110309ab3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e345db21a8406e6eb9f4e55bd6f3cf8efed4b20820b36f77e8a0be7439206551\"" Jul 14 23:06:28.042762 containerd[1538]: time="2025-07-14T23:06:28.042698911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 23:06:28.082397 kubelet[2725]: E0714 23:06:28.082377 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.082397 kubelet[2725]: W0714 23:06:28.082392 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.082486 kubelet[2725]: E0714 23:06:28.082411 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.262099 containerd[1538]: time="2025-07-14T23:06:28.261659184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mg8v2,Uid:05d09353-cdba-4623-abc4-75b496012fe5,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:28.264782 kubelet[2725]: E0714 23:06:28.264678 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:28.273396 kubelet[2725]: E0714 23:06:28.273232 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.273396 kubelet[2725]: W0714 23:06:28.273251 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.273528 kubelet[2725]: E0714 23:06:28.273414 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.277964 kubelet[2725]: E0714 23:06:28.277670 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.277964 kubelet[2725]: W0714 23:06:28.277691 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.278259 kubelet[2725]: E0714 23:06:28.278107 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.278672 kubelet[2725]: E0714 23:06:28.278532 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.278672 kubelet[2725]: W0714 23:06:28.278543 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.278672 kubelet[2725]: E0714 23:06:28.278554 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.279518 kubelet[2725]: E0714 23:06:28.279358 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.279518 kubelet[2725]: W0714 23:06:28.279369 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.279518 kubelet[2725]: E0714 23:06:28.279380 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.280095 kubelet[2725]: E0714 23:06:28.279848 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.280095 kubelet[2725]: W0714 23:06:28.279860 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.280095 kubelet[2725]: E0714 23:06:28.279869 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.281785 kubelet[2725]: E0714 23:06:28.280388 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.281785 kubelet[2725]: W0714 23:06:28.280398 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.281785 kubelet[2725]: E0714 23:06:28.280406 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.281785 kubelet[2725]: E0714 23:06:28.281655 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.281785 kubelet[2725]: W0714 23:06:28.281665 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.281785 kubelet[2725]: E0714 23:06:28.281675 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.282025 kubelet[2725]: E0714 23:06:28.281957 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.282025 kubelet[2725]: W0714 23:06:28.281965 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.282025 kubelet[2725]: E0714 23:06:28.281971 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.282497 kubelet[2725]: E0714 23:06:28.282489 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.282542 kubelet[2725]: W0714 23:06:28.282536 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.282579 kubelet[2725]: E0714 23:06:28.282570 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.282707 kubelet[2725]: E0714 23:06:28.282701 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.282783 kubelet[2725]: W0714 23:06:28.282777 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.283023 kubelet[2725]: E0714 23:06:28.283015 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.283618 kubelet[2725]: E0714 23:06:28.283480 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.283618 kubelet[2725]: W0714 23:06:28.283488 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.283618 kubelet[2725]: E0714 23:06:28.283496 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.283914 kubelet[2725]: E0714 23:06:28.283861 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.283914 kubelet[2725]: W0714 23:06:28.283870 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.283914 kubelet[2725]: E0714 23:06:28.283880 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.284331 kubelet[2725]: E0714 23:06:28.284321 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.284534 kubelet[2725]: W0714 23:06:28.284376 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.284534 kubelet[2725]: E0714 23:06:28.284387 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.284978 kubelet[2725]: E0714 23:06:28.284836 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.284978 kubelet[2725]: W0714 23:06:28.284845 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.284978 kubelet[2725]: E0714 23:06:28.284852 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.287168 kubelet[2725]: E0714 23:06:28.287051 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.287168 kubelet[2725]: W0714 23:06:28.287071 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.287168 kubelet[2725]: E0714 23:06:28.287094 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.287845 kubelet[2725]: E0714 23:06:28.287634 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.287845 kubelet[2725]: W0714 23:06:28.287643 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.287845 kubelet[2725]: E0714 23:06:28.287649 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288062 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.289138 kubelet[2725]: W0714 23:06:28.288068 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288091 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288283 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.289138 kubelet[2725]: W0714 23:06:28.288289 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288294 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288806 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.289138 kubelet[2725]: W0714 23:06:28.288813 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.288820 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.289138 kubelet[2725]: E0714 23:06:28.289017 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.289881 kubelet[2725]: W0714 23:06:28.289022 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.289881 kubelet[2725]: E0714 23:06:28.289029 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.289881 kubelet[2725]: E0714 23:06:28.289712 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.289881 kubelet[2725]: W0714 23:06:28.289719 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.289881 kubelet[2725]: E0714 23:06:28.289727 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.289881 kubelet[2725]: I0714 23:06:28.289744 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqz5g\" (UniqueName: \"kubernetes.io/projected/ffa01b57-cf5c-4652-8eda-490fdd179a1b-kube-api-access-jqz5g\") pod \"csi-node-driver-4jr6z\" (UID: \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\") " pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:28.290403 kubelet[2725]: E0714 23:06:28.290198 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.290403 kubelet[2725]: W0714 23:06:28.290210 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.290403 kubelet[2725]: E0714 23:06:28.290224 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.290403 kubelet[2725]: I0714 23:06:28.290238 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffa01b57-cf5c-4652-8eda-490fdd179a1b-registration-dir\") pod \"csi-node-driver-4jr6z\" (UID: \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\") " pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:28.290594 kubelet[2725]: E0714 23:06:28.290464 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.290594 kubelet[2725]: W0714 23:06:28.290472 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.290594 kubelet[2725]: E0714 23:06:28.290559 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.290897 kubelet[2725]: E0714 23:06:28.290832 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.290897 kubelet[2725]: W0714 23:06:28.290840 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.291202 kubelet[2725]: E0714 23:06:28.291004 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.291517 kubelet[2725]: E0714 23:06:28.291485 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.291517 kubelet[2725]: W0714 23:06:28.291495 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.291517 kubelet[2725]: E0714 23:06:28.291506 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.291594 kubelet[2725]: I0714 23:06:28.291519 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ffa01b57-cf5c-4652-8eda-490fdd179a1b-varrun\") pod \"csi-node-driver-4jr6z\" (UID: \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\") " pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:28.291819 kubelet[2725]: E0714 23:06:28.291808 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.291990 kubelet[2725]: W0714 23:06:28.291818 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.291990 kubelet[2725]: E0714 23:06:28.291931 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.291990 kubelet[2725]: I0714 23:06:28.291948 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffa01b57-cf5c-4652-8eda-490fdd179a1b-socket-dir\") pod \"csi-node-driver-4jr6z\" (UID: \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\") " pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:28.292898 kubelet[2725]: E0714 23:06:28.292791 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.292898 kubelet[2725]: W0714 23:06:28.292801 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.292898 kubelet[2725]: E0714 23:06:28.292849 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.293434 kubelet[2725]: E0714 23:06:28.293422 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.293434 kubelet[2725]: W0714 23:06:28.293431 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.293622 kubelet[2725]: E0714 23:06:28.293450 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.293622 kubelet[2725]: E0714 23:06:28.293616 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.293762 containerd[1538]: time="2025-07-14T23:06:28.293404704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:28.293762 containerd[1538]: time="2025-07-14T23:06:28.293545338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:28.293762 containerd[1538]: time="2025-07-14T23:06:28.293557460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:28.293872 kubelet[2725]: W0714 23:06:28.293624 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.293872 kubelet[2725]: E0714 23:06:28.293642 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.293872 kubelet[2725]: E0714 23:06:28.293779 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.294048 kubelet[2725]: W0714 23:06:28.293786 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.294102 containerd[1538]: time="2025-07-14T23:06:28.293999481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:28.294132 kubelet[2725]: E0714 23:06:28.294115 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.294152 kubelet[2725]: I0714 23:06:28.294136 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ffa01b57-cf5c-4652-8eda-490fdd179a1b-kubelet-dir\") pod \"csi-node-driver-4jr6z\" (UID: \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\") " pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:28.294530 kubelet[2725]: E0714 23:06:28.294518 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.294530 kubelet[2725]: W0714 23:06:28.294528 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.294673 kubelet[2725]: E0714 23:06:28.294648 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.295149 kubelet[2725]: E0714 23:06:28.294797 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.295149 kubelet[2725]: W0714 23:06:28.294804 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.295149 kubelet[2725]: E0714 23:06:28.294810 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.295286 kubelet[2725]: E0714 23:06:28.295276 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.295286 kubelet[2725]: W0714 23:06:28.295284 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.295345 kubelet[2725]: E0714 23:06:28.295296 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.295508 kubelet[2725]: E0714 23:06:28.295496 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.295508 kubelet[2725]: W0714 23:06:28.295504 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.295567 kubelet[2725]: E0714 23:06:28.295511 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.295753 kubelet[2725]: E0714 23:06:28.295718 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.295753 kubelet[2725]: W0714 23:06:28.295726 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.295753 kubelet[2725]: E0714 23:06:28.295733 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.311232 systemd[1]: Started cri-containerd-3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4.scope - libcontainer container 3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4. Jul 14 23:06:28.368524 containerd[1538]: time="2025-07-14T23:06:28.368497555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mg8v2,Uid:05d09353-cdba-4623-abc4-75b496012fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\"" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396319 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.396695 kubelet[2725]: W0714 23:06:28.396336 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396350 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396482 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.396695 kubelet[2725]: W0714 23:06:28.396488 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396498 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396607 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.396695 kubelet[2725]: W0714 23:06:28.396612 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.396695 kubelet[2725]: E0714 23:06:28.396617 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.397389 kubelet[2725]: E0714 23:06:28.397381 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.397435 kubelet[2725]: W0714 23:06:28.397428 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.397537 kubelet[2725]: E0714 23:06:28.397466 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397656 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.400981 kubelet[2725]: W0714 23:06:28.397665 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397672 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397778 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.400981 kubelet[2725]: W0714 23:06:28.397783 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397791 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397872 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.400981 kubelet[2725]: W0714 23:06:28.397881 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.397886 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.400981 kubelet[2725]: E0714 23:06:28.399172 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401473 kubelet[2725]: W0714 23:06:28.399180 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.399186 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.399319 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401473 kubelet[2725]: W0714 23:06:28.399324 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.399329 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.400295 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401473 kubelet[2725]: W0714 23:06:28.400300 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.400306 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.401473 kubelet[2725]: E0714 23:06:28.400516 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401473 kubelet[2725]: W0714 23:06:28.400522 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.400744 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.400843 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401633 kubelet[2725]: W0714 23:06:28.400848 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.400894 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.401105 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401633 kubelet[2725]: W0714 23:06:28.401111 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.401128 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.401339 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.401633 kubelet[2725]: W0714 23:06:28.401345 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.401633 kubelet[2725]: E0714 23:06:28.401355 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.401828 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.405619 kubelet[2725]: W0714 23:06:28.401834 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.401843 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.401953 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.405619 kubelet[2725]: W0714 23:06:28.401957 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.402029 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.402055 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.405619 kubelet[2725]: W0714 23:06:28.402088 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.402203 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.405619 kubelet[2725]: W0714 23:06:28.402208 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.405619 kubelet[2725]: E0714 23:06:28.402299 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410096 kubelet[2725]: W0714 23:06:28.402304 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402309 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402498 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410096 kubelet[2725]: W0714 23:06:28.402503 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402509 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402968 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410096 kubelet[2725]: W0714 23:06:28.402975 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402981 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.402991 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.410096 kubelet[2725]: E0714 23:06:28.403119 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410261 kubelet[2725]: W0714 23:06:28.403124 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403130 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403138 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403532 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410261 kubelet[2725]: W0714 23:06:28.403538 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403550 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403696 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410261 kubelet[2725]: W0714 23:06:28.403739 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.403749 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:28.410261 kubelet[2725]: E0714 23:06:28.406011 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410455 kubelet[2725]: W0714 23:06:28.406018 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410455 kubelet[2725]: E0714 23:06:28.406024 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:28.410455 kubelet[2725]: E0714 23:06:28.408249 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:28.410455 kubelet[2725]: W0714 23:06:28.408256 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:28.410455 kubelet[2725]: E0714 23:06:28.408262 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:29.721982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100065203.mount: Deactivated successfully. Jul 14 23:06:30.164433 kubelet[2725]: E0714 23:06:30.164348 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:30.791623 containerd[1538]: time="2025-07-14T23:06:30.791590552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:30.792606 containerd[1538]: time="2025-07-14T23:06:30.792057188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 23:06:30.793307 containerd[1538]: time="2025-07-14T23:06:30.793283170Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:30.795428 containerd[1538]: time="2025-07-14T23:06:30.795322582Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:30.796060 containerd[1538]: time="2025-07-14T23:06:30.795953352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.753232774s" Jul 14 23:06:30.796203 containerd[1538]: time="2025-07-14T23:06:30.796134459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 23:06:30.797196 containerd[1538]: time="2025-07-14T23:06:30.797184088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 23:06:30.813991 containerd[1538]: time="2025-07-14T23:06:30.812269943Z" level=info msg="CreateContainer within sandbox \"e345db21a8406e6eb9f4e55bd6f3cf8efed4b20820b36f77e8a0be7439206551\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 23:06:30.826538 containerd[1538]: time="2025-07-14T23:06:30.826509044Z" level=info msg="CreateContainer within sandbox \"e345db21a8406e6eb9f4e55bd6f3cf8efed4b20820b36f77e8a0be7439206551\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81\"" Jul 14 23:06:30.827013 containerd[1538]: time="2025-07-14T23:06:30.826987411Z" level=info msg="StartContainer for \"7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81\"" Jul 14 23:06:30.859175 systemd[1]: Started cri-containerd-7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81.scope - libcontainer container 
7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81. Jul 14 23:06:30.893271 containerd[1538]: time="2025-07-14T23:06:30.893246076Z" level=info msg="StartContainer for \"7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81\" returns successfully" Jul 14 23:06:31.304758 kubelet[2725]: E0714 23:06:31.304699 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:31.304758 kubelet[2725]: W0714 23:06:31.304721 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:31.304758 kubelet[2725]: E0714 23:06:31.304742 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.304911 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:31.305281 kubelet[2725]: W0714 23:06:31.304917 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.304927 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.305037 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:31.305281 kubelet[2725]: W0714 23:06:31.305043 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.305048 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.305208 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:31.305281 kubelet[2725]: W0714 23:06:31.305215 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:31.305281 kubelet[2725]: E0714 23:06:31.305224 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:31.305597 kubelet[2725]: E0714 23:06:31.305339 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:31.305597 kubelet[2725]: W0714 23:06:31.305344 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:31.305597 kubelet[2725]: E0714 23:06:31.305350 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 23:06:31.807968 systemd[1]: run-containerd-runc-k8s.io-7431f7c2da435599ed73180a38430f09517a71e13fa0b354e91633e488b4ce81-runc.8A8EFX.mount: Deactivated successfully. Jul 14 23:06:32.163826 kubelet[2725]: E0714 23:06:32.163602 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:32.225236 kubelet[2725]: I0714 23:06:32.225218 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 23:06:32.325259 kubelet[2725]: E0714 23:06:32.325228 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325259 kubelet[2725]: W0714 23:06:32.325235 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325259 kubelet[2725]: E0714 23:06:32.325246 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:32.325408 kubelet[2725]: E0714 23:06:32.325379 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325408 kubelet[2725]: W0714 23:06:32.325385 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325408 kubelet[2725]: E0714 23:06:32.325393 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:32.325496 kubelet[2725]: E0714 23:06:32.325487 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325496 kubelet[2725]: W0714 23:06:32.325493 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325623 kubelet[2725]: E0714 23:06:32.325501 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:32.325623 kubelet[2725]: E0714 23:06:32.325619 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325683 kubelet[2725]: W0714 23:06:32.325623 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325683 kubelet[2725]: E0714 23:06:32.325628 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:32.325733 kubelet[2725]: E0714 23:06:32.325712 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325733 kubelet[2725]: W0714 23:06:32.325716 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325733 kubelet[2725]: E0714 23:06:32.325721 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:32.325827 kubelet[2725]: E0714 23:06:32.325818 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.325827 kubelet[2725]: W0714 23:06:32.325824 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.325827 kubelet[2725]: E0714 23:06:32.325828 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 23:06:32.329476 kubelet[2725]: E0714 23:06:32.325966 2725 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 23:06:32.329476 kubelet[2725]: W0714 23:06:32.325971 2725 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 23:06:32.329476 kubelet[2725]: E0714 23:06:32.325976 2725 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 23:06:32.678829 containerd[1538]: time="2025-07-14T23:06:32.678374356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:32.678829 containerd[1538]: time="2025-07-14T23:06:32.678766607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 14 23:06:32.678829 containerd[1538]: time="2025-07-14T23:06:32.678804950Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:32.679994 containerd[1538]: time="2025-07-14T23:06:32.679980571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:32.680425 containerd[1538]: time="2025-07-14T23:06:32.680405915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.883139095s" Jul 14 23:06:32.680452 containerd[1538]: time="2025-07-14T23:06:32.680425309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 23:06:32.681824 containerd[1538]: time="2025-07-14T23:06:32.681811651Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 23:06:32.688164 containerd[1538]: time="2025-07-14T23:06:32.688142037Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39\"" Jul 14 23:06:32.688622 containerd[1538]: time="2025-07-14T23:06:32.688604550Z" level=info msg="StartContainer for \"53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39\"" Jul 14 23:06:32.711209 systemd[1]: Started cri-containerd-53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39.scope - libcontainer container 53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39. Jul 14 23:06:32.762531 systemd[1]: cri-containerd-53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39.scope: Deactivated successfully. Jul 14 23:06:32.770192 containerd[1538]: time="2025-07-14T23:06:32.765423452Z" level=info msg="StartContainer for \"53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39\" returns successfully" Jul 14 23:06:32.807867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39-rootfs.mount: Deactivated successfully. 
Jul 14 23:06:33.237470 containerd[1538]: time="2025-07-14T23:06:33.223876434Z" level=info msg="shim disconnected" id=53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39 namespace=k8s.io Jul 14 23:06:33.237789 containerd[1538]: time="2025-07-14T23:06:33.237472747Z" level=warning msg="cleaning up after shim disconnected" id=53b6af9c8ae6605b1340ae1c37cf5269096cb2f16521cb4b0d0c9be18e055e39 namespace=k8s.io Jul 14 23:06:33.237789 containerd[1538]: time="2025-07-14T23:06:33.237486671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:06:33.240420 kubelet[2725]: I0714 23:06:33.240243 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74587cd959-c8kjz" podStartSLOduration=3.485702953 podStartE2EDuration="6.240229925s" podCreationTimestamp="2025-07-14 23:06:27 +0000 UTC" firstStartedPulling="2025-07-14 23:06:28.042333602 +0000 UTC m=+16.963039892" lastFinishedPulling="2025-07-14 23:06:30.796860569 +0000 UTC m=+19.717566864" observedRunningTime="2025-07-14 23:06:31.233416962 +0000 UTC m=+20.154123263" watchObservedRunningTime="2025-07-14 23:06:33.240229925 +0000 UTC m=+22.160936224" Jul 14 23:06:34.164434 kubelet[2725]: E0714 23:06:34.164393 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:34.231122 containerd[1538]: time="2025-07-14T23:06:34.230856210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 23:06:36.164204 kubelet[2725]: E0714 23:06:36.164134 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:37.695964 containerd[1538]: time="2025-07-14T23:06:37.695930437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:37.703662 containerd[1538]: time="2025-07-14T23:06:37.703560878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 23:06:37.704407 containerd[1538]: time="2025-07-14T23:06:37.704172382Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:37.707171 containerd[1538]: time="2025-07-14T23:06:37.707140524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:37.707659 containerd[1538]: time="2025-07-14T23:06:37.707592241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.476713555s" Jul 14 23:06:37.707659 containerd[1538]: time="2025-07-14T23:06:37.707610398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 23:06:37.709438 containerd[1538]: time="2025-07-14T23:06:37.709418325Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 23:06:37.730505 
containerd[1538]: time="2025-07-14T23:06:37.730458594Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47\"" Jul 14 23:06:37.731311 containerd[1538]: time="2025-07-14T23:06:37.731226675Z" level=info msg="StartContainer for \"4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47\"" Jul 14 23:06:37.753164 systemd[1]: Started cri-containerd-4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47.scope - libcontainer container 4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47. Jul 14 23:06:37.772759 containerd[1538]: time="2025-07-14T23:06:37.772688593Z" level=info msg="StartContainer for \"4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47\" returns successfully" Jul 14 23:06:38.164094 kubelet[2725]: E0714 23:06:38.164041 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:38.741562 kubelet[2725]: I0714 23:06:38.741418 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 23:06:39.144156 systemd[1]: cri-containerd-4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47.scope: Deactivated successfully. Jul 14 23:06:39.208059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47-rootfs.mount: Deactivated successfully. 
Jul 14 23:06:39.210011 containerd[1538]: time="2025-07-14T23:06:39.208718700Z" level=info msg="shim disconnected" id=4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47 namespace=k8s.io Jul 14 23:06:39.210011 containerd[1538]: time="2025-07-14T23:06:39.208756159Z" level=warning msg="cleaning up after shim disconnected" id=4b355c0e7833995e9ab1c4c7d1587e2e8328d518929245f0d3d6bcd41c9dec47 namespace=k8s.io Jul 14 23:06:39.210011 containerd[1538]: time="2025-07-14T23:06:39.208764070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 23:06:39.241350 containerd[1538]: time="2025-07-14T23:06:39.241323282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 23:06:39.300608 kubelet[2725]: I0714 23:06:39.296543 2725 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 23:06:39.357790 systemd[1]: Created slice kubepods-besteffort-podb3b1fbe5_9018_4070_bc43_78a6548c5e8b.slice - libcontainer container kubepods-besteffort-podb3b1fbe5_9018_4070_bc43_78a6548c5e8b.slice. Jul 14 23:06:39.363164 systemd[1]: Created slice kubepods-besteffort-podf381ec3d_1f99_48f5_a759_d4ef727cd042.slice - libcontainer container kubepods-besteffort-podf381ec3d_1f99_48f5_a759_d4ef727cd042.slice. 
Jul 14 23:06:39.364793 kubelet[2725]: I0714 23:06:39.364733 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eac42073-634d-4a92-8c8b-4e4d39002987-config-volume\") pod \"coredns-7c65d6cfc9-hc96w\" (UID: \"eac42073-634d-4a92-8c8b-4e4d39002987\") " pod="kube-system/coredns-7c65d6cfc9-hc96w" Jul 14 23:06:39.365085 kubelet[2725]: I0714 23:06:39.364964 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-backend-key-pair\") pod \"whisker-5b6c867988-cbx84\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " pod="calico-system/whisker-5b6c867988-cbx84" Jul 14 23:06:39.365146 kubelet[2725]: I0714 23:06:39.365138 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-ca-bundle\") pod \"whisker-5b6c867988-cbx84\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " pod="calico-system/whisker-5b6c867988-cbx84" Jul 14 23:06:39.365195 kubelet[2725]: I0714 23:06:39.365188 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4r4\" (UniqueName: \"kubernetes.io/projected/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-kube-api-access-5z4r4\") pod \"whisker-5b6c867988-cbx84\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " pod="calico-system/whisker-5b6c867988-cbx84" Jul 14 23:06:39.365865 kubelet[2725]: I0714 23:06:39.365230 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f381ec3d-1f99-48f5-a759-d4ef727cd042-tigera-ca-bundle\") pod \"calico-kube-controllers-789c44b4f5-hbf9t\" (UID: 
\"f381ec3d-1f99-48f5-a759-d4ef727cd042\") " pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" Jul 14 23:06:39.365865 kubelet[2725]: I0714 23:06:39.365249 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43e0fd53-8ec8-4b02-86e2-a124557aa367-goldmane-key-pair\") pod \"goldmane-58fd7646b9-5d4g6\" (UID: \"43e0fd53-8ec8-4b02-86e2-a124557aa367\") " pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:39.365865 kubelet[2725]: I0714 23:06:39.365264 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbr49\" (UniqueName: \"kubernetes.io/projected/43e0fd53-8ec8-4b02-86e2-a124557aa367-kube-api-access-wbr49\") pod \"goldmane-58fd7646b9-5d4g6\" (UID: \"43e0fd53-8ec8-4b02-86e2-a124557aa367\") " pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:39.365865 kubelet[2725]: I0714 23:06:39.365275 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vxj\" (UniqueName: \"kubernetes.io/projected/eac42073-634d-4a92-8c8b-4e4d39002987-kube-api-access-g4vxj\") pod \"coredns-7c65d6cfc9-hc96w\" (UID: \"eac42073-634d-4a92-8c8b-4e4d39002987\") " pod="kube-system/coredns-7c65d6cfc9-hc96w" Jul 14 23:06:39.365865 kubelet[2725]: I0714 23:06:39.365287 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs45v\" (UniqueName: \"kubernetes.io/projected/010be3c5-59c9-4ff4-a6d2-2124514c9299-kube-api-access-qs45v\") pod \"calico-apiserver-7fb9554685-xx2mf\" (UID: \"010be3c5-59c9-4ff4-a6d2-2124514c9299\") " pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" Jul 14 23:06:39.366609 kubelet[2725]: I0714 23:06:39.365300 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfc4v\" (UniqueName: 
\"kubernetes.io/projected/b3b1fbe5-9018-4070-bc43-78a6548c5e8b-kube-api-access-nfc4v\") pod \"calico-apiserver-7fb9554685-vnrld\" (UID: \"b3b1fbe5-9018-4070-bc43-78a6548c5e8b\") " pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" Jul 14 23:06:39.366609 kubelet[2725]: I0714 23:06:39.365316 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43e0fd53-8ec8-4b02-86e2-a124557aa367-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-5d4g6\" (UID: \"43e0fd53-8ec8-4b02-86e2-a124557aa367\") " pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:39.366609 kubelet[2725]: I0714 23:06:39.365326 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/010be3c5-59c9-4ff4-a6d2-2124514c9299-calico-apiserver-certs\") pod \"calico-apiserver-7fb9554685-xx2mf\" (UID: \"010be3c5-59c9-4ff4-a6d2-2124514c9299\") " pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" Jul 14 23:06:39.366609 kubelet[2725]: I0714 23:06:39.365335 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2lvw\" (UniqueName: \"kubernetes.io/projected/47889b81-1613-42d1-9473-e89886fa669f-kube-api-access-q2lvw\") pod \"coredns-7c65d6cfc9-86f6f\" (UID: \"47889b81-1613-42d1-9473-e89886fa669f\") " pod="kube-system/coredns-7c65d6cfc9-86f6f" Jul 14 23:06:39.366609 kubelet[2725]: I0714 23:06:39.365346 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47889b81-1613-42d1-9473-e89886fa669f-config-volume\") pod \"coredns-7c65d6cfc9-86f6f\" (UID: \"47889b81-1613-42d1-9473-e89886fa669f\") " pod="kube-system/coredns-7c65d6cfc9-86f6f" Jul 14 23:06:39.366713 kubelet[2725]: I0714 23:06:39.365355 2725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e0fd53-8ec8-4b02-86e2-a124557aa367-config\") pod \"goldmane-58fd7646b9-5d4g6\" (UID: \"43e0fd53-8ec8-4b02-86e2-a124557aa367\") " pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:39.366713 kubelet[2725]: I0714 23:06:39.365364 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6z9x\" (UniqueName: \"kubernetes.io/projected/f381ec3d-1f99-48f5-a759-d4ef727cd042-kube-api-access-t6z9x\") pod \"calico-kube-controllers-789c44b4f5-hbf9t\" (UID: \"f381ec3d-1f99-48f5-a759-d4ef727cd042\") " pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" Jul 14 23:06:39.366713 kubelet[2725]: I0714 23:06:39.365375 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3b1fbe5-9018-4070-bc43-78a6548c5e8b-calico-apiserver-certs\") pod \"calico-apiserver-7fb9554685-vnrld\" (UID: \"b3b1fbe5-9018-4070-bc43-78a6548c5e8b\") " pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" Jul 14 23:06:39.374264 systemd[1]: Created slice kubepods-besteffort-pod010be3c5_59c9_4ff4_a6d2_2124514c9299.slice - libcontainer container kubepods-besteffort-pod010be3c5_59c9_4ff4_a6d2_2124514c9299.slice. Jul 14 23:06:39.379058 systemd[1]: Created slice kubepods-burstable-podeac42073_634d_4a92_8c8b_4e4d39002987.slice - libcontainer container kubepods-burstable-podeac42073_634d_4a92_8c8b_4e4d39002987.slice. Jul 14 23:06:39.382794 systemd[1]: Created slice kubepods-burstable-pod47889b81_1613_42d1_9473_e89886fa669f.slice - libcontainer container kubepods-burstable-pod47889b81_1613_42d1_9473_e89886fa669f.slice. 
Jul 14 23:06:39.389043 systemd[1]: Created slice kubepods-besteffort-pod43e0fd53_8ec8_4b02_86e2_a124557aa367.slice - libcontainer container kubepods-besteffort-pod43e0fd53_8ec8_4b02_86e2_a124557aa367.slice. Jul 14 23:06:39.393583 systemd[1]: Created slice kubepods-besteffort-podc1d1e827_ce30_4bdb_94ef_778bb4b83e4f.slice - libcontainer container kubepods-besteffort-podc1d1e827_ce30_4bdb_94ef_778bb4b83e4f.slice. Jul 14 23:06:39.669171 containerd[1538]: time="2025-07-14T23:06:39.668658769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789c44b4f5-hbf9t,Uid:f381ec3d-1f99-48f5-a759-d4ef727cd042,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:39.669631 containerd[1538]: time="2025-07-14T23:06:39.669285722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-vnrld,Uid:b3b1fbe5-9018-4070-bc43-78a6548c5e8b,Namespace:calico-apiserver,Attempt:0,}" Jul 14 23:06:39.680544 containerd[1538]: time="2025-07-14T23:06:39.680311749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-xx2mf,Uid:010be3c5-59c9-4ff4-a6d2-2124514c9299,Namespace:calico-apiserver,Attempt:0,}" Jul 14 23:06:39.683346 containerd[1538]: time="2025-07-14T23:06:39.683249953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc96w,Uid:eac42073-634d-4a92-8c8b-4e4d39002987,Namespace:kube-system,Attempt:0,}" Jul 14 23:06:39.687112 containerd[1538]: time="2025-07-14T23:06:39.686552551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-86f6f,Uid:47889b81-1613-42d1-9473-e89886fa669f,Namespace:kube-system,Attempt:0,}" Jul 14 23:06:39.698537 containerd[1538]: time="2025-07-14T23:06:39.698381344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6c867988-cbx84,Uid:c1d1e827-ce30-4bdb-94ef-778bb4b83e4f,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:39.699768 containerd[1538]: time="2025-07-14T23:06:39.699749083Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5d4g6,Uid:43e0fd53-8ec8-4b02-86e2-a124557aa367,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:40.030503 containerd[1538]: time="2025-07-14T23:06:40.030413667Z" level=error msg="Failed to destroy network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.034056 containerd[1538]: time="2025-07-14T23:06:40.033796966Z" level=error msg="Failed to destroy network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.037049 containerd[1538]: time="2025-07-14T23:06:40.037022724Z" level=error msg="encountered an error cleaning up failed sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.037208 containerd[1538]: time="2025-07-14T23:06:40.037190081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc96w,Uid:eac42073-634d-4a92-8c8b-4e4d39002987,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.037567 containerd[1538]: time="2025-07-14T23:06:40.037242380Z" 
level=error msg="encountered an error cleaning up failed sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.037567 containerd[1538]: time="2025-07-14T23:06:40.037346519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-vnrld,Uid:b3b1fbe5-9018-4070-bc43-78a6548c5e8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.043292 containerd[1538]: time="2025-07-14T23:06:40.037247275Z" level=error msg="Failed to destroy network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.044170 containerd[1538]: time="2025-07-14T23:06:40.044094072Z" level=error msg="encountered an error cleaning up failed sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.044170 containerd[1538]: time="2025-07-14T23:06:40.044128405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-86f6f,Uid:47889b81-1613-42d1-9473-e89886fa669f,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.050533 kubelet[2725]: E0714 23:06:40.050381 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.050635 containerd[1538]: time="2025-07-14T23:06:40.050457207Z" level=error msg="Failed to destroy network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.050694 containerd[1538]: time="2025-07-14T23:06:40.050676871Z" level=error msg="encountered an error cleaning up failed sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.050720 containerd[1538]: time="2025-07-14T23:06:40.050709309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789c44b4f5-hbf9t,Uid:f381ec3d-1f99-48f5-a759-d4ef727cd042,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.050780 kubelet[2725]: E0714 23:06:40.050739 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.051989 kubelet[2725]: E0714 23:06:40.051912 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hc96w" Jul 14 23:06:40.051989 kubelet[2725]: E0714 23:06:40.051942 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.051989 kubelet[2725]: E0714 23:06:40.051959 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" Jul 14 23:06:40.053453 kubelet[2725]: E0714 23:06:40.053432 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" Jul 14 23:06:40.053493 kubelet[2725]: E0714 23:06:40.053476 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fb9554685-vnrld_calico-apiserver(b3b1fbe5-9018-4070-bc43-78a6548c5e8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fb9554685-vnrld_calico-apiserver(b3b1fbe5-9018-4070-bc43-78a6548c5e8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" podUID="b3b1fbe5-9018-4070-bc43-78a6548c5e8b" Jul 14 23:06:40.054360 kubelet[2725]: E0714 23:06:40.053877 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.054360 kubelet[2725]: E0714 23:06:40.053898 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" Jul 14 23:06:40.054360 kubelet[2725]: E0714 23:06:40.053908 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" Jul 14 23:06:40.054467 kubelet[2725]: E0714 23:06:40.053925 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789c44b4f5-hbf9t_calico-system(f381ec3d-1f99-48f5-a759-d4ef727cd042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789c44b4f5-hbf9t_calico-system(f381ec3d-1f99-48f5-a759-d4ef727cd042)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" podUID="f381ec3d-1f99-48f5-a759-d4ef727cd042" Jul 14 23:06:40.054467 kubelet[2725]: E0714 23:06:40.051912 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-86f6f" Jul 14 23:06:40.054467 kubelet[2725]: E0714 23:06:40.053963 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-86f6f" Jul 14 23:06:40.054640 kubelet[2725]: E0714 23:06:40.053980 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-86f6f_kube-system(47889b81-1613-42d1-9473-e89886fa669f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-86f6f_kube-system(47889b81-1613-42d1-9473-e89886fa669f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-86f6f" podUID="47889b81-1613-42d1-9473-e89886fa669f" Jul 14 23:06:40.054750 kubelet[2725]: E0714 23:06:40.054732 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hc96w" Jul 
14 23:06:40.054828 kubelet[2725]: E0714 23:06:40.054809 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hc96w_kube-system(eac42073-634d-4a92-8c8b-4e4d39002987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hc96w_kube-system(eac42073-634d-4a92-8c8b-4e4d39002987)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hc96w" podUID="eac42073-634d-4a92-8c8b-4e4d39002987" Jul 14 23:06:40.059071 containerd[1538]: time="2025-07-14T23:06:40.058534462Z" level=error msg="Failed to destroy network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.059071 containerd[1538]: time="2025-07-14T23:06:40.058821526Z" level=error msg="encountered an error cleaning up failed sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.059071 containerd[1538]: time="2025-07-14T23:06:40.058853042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5d4g6,Uid:43e0fd53-8ec8-4b02-86e2-a124557aa367,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.060314 kubelet[2725]: E0714 23:06:40.058978 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.060314 kubelet[2725]: E0714 23:06:40.059011 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:40.060314 kubelet[2725]: E0714 23:06:40.059059 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5d4g6" Jul 14 23:06:40.060389 kubelet[2725]: E0714 23:06:40.059166 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-5d4g6_calico-system(43e0fd53-8ec8-4b02-86e2-a124557aa367)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-5d4g6_calico-system(43e0fd53-8ec8-4b02-86e2-a124557aa367)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5d4g6" podUID="43e0fd53-8ec8-4b02-86e2-a124557aa367" Jul 14 23:06:40.061841 containerd[1538]: time="2025-07-14T23:06:40.061793002Z" level=error msg="Failed to destroy network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062246 containerd[1538]: time="2025-07-14T23:06:40.062221683Z" level=error msg="encountered an error cleaning up failed sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062280 containerd[1538]: time="2025-07-14T23:06:40.062267248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-xx2mf,Uid:010be3c5-59c9-4ff4-a6d2-2124514c9299,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062517 containerd[1538]: time="2025-07-14T23:06:40.062359304Z" level=error msg="Failed to destroy network for sandbox 
\"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062551 kubelet[2725]: E0714 23:06:40.062390 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062551 kubelet[2725]: E0714 23:06:40.062434 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" Jul 14 23:06:40.062551 kubelet[2725]: E0714 23:06:40.062446 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" Jul 14 23:06:40.062622 containerd[1538]: time="2025-07-14T23:06:40.062520400Z" level=error msg="encountered an error cleaning up failed sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062622 containerd[1538]: time="2025-07-14T23:06:40.062540946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6c867988-cbx84,Uid:c1d1e827-ce30-4bdb-94ef-778bb4b83e4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.062677 kubelet[2725]: E0714 23:06:40.062472 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fb9554685-xx2mf_calico-apiserver(010be3c5-59c9-4ff4-a6d2-2124514c9299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fb9554685-xx2mf_calico-apiserver(010be3c5-59c9-4ff4-a6d2-2124514c9299)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" podUID="010be3c5-59c9-4ff4-a6d2-2124514c9299" Jul 14 23:06:40.062836 kubelet[2725]: E0714 23:06:40.062760 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 
23:06:40.062836 kubelet[2725]: E0714 23:06:40.062779 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b6c867988-cbx84" Jul 14 23:06:40.062836 kubelet[2725]: E0714 23:06:40.062790 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b6c867988-cbx84" Jul 14 23:06:40.063249 kubelet[2725]: E0714 23:06:40.062817 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b6c867988-cbx84_calico-system(c1d1e827-ce30-4bdb-94ef-778bb4b83e4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b6c867988-cbx84_calico-system(c1d1e827-ce30-4bdb-94ef-778bb4b83e4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b6c867988-cbx84" podUID="c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" Jul 14 23:06:40.167686 systemd[1]: Created slice kubepods-besteffort-podffa01b57_cf5c_4652_8eda_490fdd179a1b.slice - libcontainer container kubepods-besteffort-podffa01b57_cf5c_4652_8eda_490fdd179a1b.slice. 
Jul 14 23:06:40.169302 containerd[1538]: time="2025-07-14T23:06:40.169277096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jr6z,Uid:ffa01b57-cf5c-4652-8eda-490fdd179a1b,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:40.204253 containerd[1538]: time="2025-07-14T23:06:40.204175613Z" level=error msg="Failed to destroy network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.204619 containerd[1538]: time="2025-07-14T23:06:40.204530121Z" level=error msg="encountered an error cleaning up failed sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.204619 containerd[1538]: time="2025-07-14T23:06:40.204562308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jr6z,Uid:ffa01b57-cf5c-4652-8eda-490fdd179a1b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.204752 kubelet[2725]: E0714 23:06:40.204724 2725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.204803 kubelet[2725]: E0714 23:06:40.204772 2725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:40.204803 kubelet[2725]: E0714 23:06:40.204784 2725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jr6z" Jul 14 23:06:40.204859 kubelet[2725]: E0714 23:06:40.204814 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4jr6z_calico-system(ffa01b57-cf5c-4652-8eda-490fdd179a1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4jr6z_calico-system(ffa01b57-cf5c-4652-8eda-490fdd179a1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:40.208807 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951-shm.mount: Deactivated successfully. Jul 14 23:06:40.254513 kubelet[2725]: I0714 23:06:40.254498 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:40.255333 kubelet[2725]: I0714 23:06:40.255057 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:40.264012 kubelet[2725]: I0714 23:06:40.263992 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:40.266785 kubelet[2725]: I0714 23:06:40.266610 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:40.268969 kubelet[2725]: I0714 23:06:40.268922 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:40.293124 containerd[1538]: time="2025-07-14T23:06:40.292237533Z" level=info msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" Jul 14 23:06:40.293124 containerd[1538]: time="2025-07-14T23:06:40.292911769Z" level=info msg="Ensure that sandbox 80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d in task-service has been cleanup successfully" Jul 14 23:06:40.295200 containerd[1538]: time="2025-07-14T23:06:40.294447570Z" level=info msg="StopPodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" Jul 14 23:06:40.295200 containerd[1538]: time="2025-07-14T23:06:40.294550892Z" level=info msg="Ensure that sandbox 
8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e in task-service has been cleanup successfully" Jul 14 23:06:40.295200 containerd[1538]: time="2025-07-14T23:06:40.294982036Z" level=info msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" Jul 14 23:06:40.295200 containerd[1538]: time="2025-07-14T23:06:40.295066814Z" level=info msg="Ensure that sandbox 1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe in task-service has been cleanup successfully" Jul 14 23:06:40.296816 containerd[1538]: time="2025-07-14T23:06:40.296124797Z" level=info msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" Jul 14 23:06:40.296816 containerd[1538]: time="2025-07-14T23:06:40.296197098Z" level=info msg="Ensure that sandbox 8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951 in task-service has been cleanup successfully" Jul 14 23:06:40.296816 containerd[1538]: time="2025-07-14T23:06:40.296348643Z" level=info msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" Jul 14 23:06:40.296889 kubelet[2725]: I0714 23:06:40.296463 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:40.297160 containerd[1538]: time="2025-07-14T23:06:40.297142605Z" level=info msg="Ensure that sandbox d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254 in task-service has been cleanup successfully" Jul 14 23:06:40.297972 containerd[1538]: time="2025-07-14T23:06:40.297956747Z" level=info msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" Jul 14 23:06:40.299160 containerd[1538]: time="2025-07-14T23:06:40.299146286Z" level=info msg="Ensure that sandbox d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10 in task-service has been cleanup successfully" Jul 14 23:06:40.300671 
kubelet[2725]: I0714 23:06:40.300659 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:40.303519 containerd[1538]: time="2025-07-14T23:06:40.303499465Z" level=info msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" Jul 14 23:06:40.311527 containerd[1538]: time="2025-07-14T23:06:40.311347709Z" level=info msg="Ensure that sandbox d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b in task-service has been cleanup successfully" Jul 14 23:06:40.319795 kubelet[2725]: I0714 23:06:40.319720 2725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:40.321108 containerd[1538]: time="2025-07-14T23:06:40.320985704Z" level=info msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" Jul 14 23:06:40.326647 containerd[1538]: time="2025-07-14T23:06:40.326552016Z" level=info msg="Ensure that sandbox 60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357 in task-service has been cleanup successfully" Jul 14 23:06:40.351022 containerd[1538]: time="2025-07-14T23:06:40.350744223Z" level=error msg="StopPodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" failed" error="failed to destroy network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.351117 kubelet[2725]: E0714 23:06:40.350893 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:40.351117 kubelet[2725]: E0714 23:06:40.350937 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e"} Jul 14 23:06:40.351117 kubelet[2725]: E0714 23:06:40.350987 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43e0fd53-8ec8-4b02-86e2-a124557aa367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.351117 kubelet[2725]: E0714 23:06:40.351001 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43e0fd53-8ec8-4b02-86e2-a124557aa367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5d4g6" podUID="43e0fd53-8ec8-4b02-86e2-a124557aa367" Jul 14 23:06:40.364984 containerd[1538]: time="2025-07-14T23:06:40.364687347Z" level=error msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" failed" error="failed to destroy network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.365152 kubelet[2725]: E0714 23:06:40.364860 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:40.365152 kubelet[2725]: E0714 23:06:40.364894 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254"} Jul 14 23:06:40.365152 kubelet[2725]: E0714 23:06:40.364917 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"010be3c5-59c9-4ff4-a6d2-2124514c9299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.365152 kubelet[2725]: E0714 23:06:40.364931 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"010be3c5-59c9-4ff4-a6d2-2124514c9299\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" podUID="010be3c5-59c9-4ff4-a6d2-2124514c9299" Jul 14 23:06:40.366454 containerd[1538]: time="2025-07-14T23:06:40.366430622Z" level=error msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" failed" error="failed to destroy network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.366633 kubelet[2725]: E0714 23:06:40.366532 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:40.366633 kubelet[2725]: E0714 23:06:40.366557 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951"} Jul 14 23:06:40.366633 kubelet[2725]: E0714 23:06:40.366583 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f381ec3d-1f99-48f5-a759-d4ef727cd042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.366633 
kubelet[2725]: E0714 23:06:40.366596 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f381ec3d-1f99-48f5-a759-d4ef727cd042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" podUID="f381ec3d-1f99-48f5-a759-d4ef727cd042" Jul 14 23:06:40.370144 containerd[1538]: time="2025-07-14T23:06:40.369956307Z" level=error msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" failed" error="failed to destroy network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.370210 kubelet[2725]: E0714 23:06:40.370069 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:40.370210 kubelet[2725]: E0714 23:06:40.370146 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10"} Jul 14 23:06:40.370210 kubelet[2725]: E0714 23:06:40.370169 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"47889b81-1613-42d1-9473-e89886fa669f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.370210 kubelet[2725]: E0714 23:06:40.370183 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47889b81-1613-42d1-9473-e89886fa669f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-86f6f" podUID="47889b81-1613-42d1-9473-e89886fa669f" Jul 14 23:06:40.379957 containerd[1538]: time="2025-07-14T23:06:40.379680033Z" level=error msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" failed" error="failed to destroy network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.380050 kubelet[2725]: E0714 23:06:40.379837 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:40.380050 kubelet[2725]: E0714 23:06:40.379867 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357"} Jul 14 23:06:40.380050 kubelet[2725]: E0714 23:06:40.379890 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eac42073-634d-4a92-8c8b-4e4d39002987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.380050 kubelet[2725]: E0714 23:06:40.379915 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eac42073-634d-4a92-8c8b-4e4d39002987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hc96w" podUID="eac42073-634d-4a92-8c8b-4e4d39002987" Jul 14 23:06:40.380880 containerd[1538]: time="2025-07-14T23:06:40.380469296Z" level=error msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" failed" error="failed to destroy network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 14 23:06:40.380927 kubelet[2725]: E0714 23:06:40.380546 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:40.380927 kubelet[2725]: E0714 23:06:40.380561 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe"} Jul 14 23:06:40.380927 kubelet[2725]: E0714 23:06:40.380579 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.380927 kubelet[2725]: E0714 23:06:40.380592 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b6c867988-cbx84" podUID="c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" Jul 14 
23:06:40.383645 containerd[1538]: time="2025-07-14T23:06:40.383628656Z" level=error msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" failed" error="failed to destroy network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.383855 kubelet[2725]: E0714 23:06:40.383771 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:40.383855 kubelet[2725]: E0714 23:06:40.383798 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d"} Jul 14 23:06:40.383855 kubelet[2725]: E0714 23:06:40.383822 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.383855 kubelet[2725]: E0714 23:06:40.383839 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffa01b57-cf5c-4652-8eda-490fdd179a1b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jr6z" podUID="ffa01b57-cf5c-4652-8eda-490fdd179a1b" Jul 14 23:06:40.386379 containerd[1538]: time="2025-07-14T23:06:40.386339397Z" level=error msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" failed" error="failed to destroy network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 23:06:40.386561 kubelet[2725]: E0714 23:06:40.386475 2725 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:40.386561 kubelet[2725]: E0714 23:06:40.386495 2725 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b"} Jul 14 23:06:40.386561 kubelet[2725]: E0714 23:06:40.386522 2725 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3b1fbe5-9018-4070-bc43-78a6548c5e8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 23:06:40.386561 kubelet[2725]: E0714 23:06:40.386538 2725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3b1fbe5-9018-4070-bc43-78a6548c5e8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" podUID="b3b1fbe5-9018-4070-bc43-78a6548c5e8b" Jul 14 23:06:45.539164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196408372.mount: Deactivated successfully. 
Jul 14 23:06:45.615377 containerd[1538]: time="2025-07-14T23:06:45.592115028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 23:06:45.615377 containerd[1538]: time="2025-07-14T23:06:45.614709600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:45.638880 containerd[1538]: time="2025-07-14T23:06:45.638840200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.392511417s" Jul 14 23:06:45.639195 containerd[1538]: time="2025-07-14T23:06:45.639025757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 23:06:45.646451 containerd[1538]: time="2025-07-14T23:06:45.646417786Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:45.651648 containerd[1538]: time="2025-07-14T23:06:45.651544282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:45.666257 containerd[1538]: time="2025-07-14T23:06:45.666195164Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 23:06:45.710289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764029968.mount: 
Deactivated successfully. Jul 14 23:06:45.723066 containerd[1538]: time="2025-07-14T23:06:45.723033780Z" level=info msg="CreateContainer within sandbox \"3ddb7720d26e82575a874210b0c7e2f61732f4148c873f1ed0afc2165722b0b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935\"" Jul 14 23:06:45.723594 containerd[1538]: time="2025-07-14T23:06:45.723578231Z" level=info msg="StartContainer for \"666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935\"" Jul 14 23:06:45.816329 systemd[1]: Started cri-containerd-666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935.scope - libcontainer container 666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935. Jul 14 23:06:45.837620 containerd[1538]: time="2025-07-14T23:06:45.837594032Z" level=info msg="StartContainer for \"666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935\" returns successfully" Jul 14 23:06:46.030157 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 23:06:46.032707 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 14 23:06:46.696715 kubelet[2725]: I0714 23:06:46.693901 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mg8v2" podStartSLOduration=2.409077089 podStartE2EDuration="19.69092098s" podCreationTimestamp="2025-07-14 23:06:27 +0000 UTC" firstStartedPulling="2025-07-14 23:06:28.370134096 +0000 UTC m=+17.290840386" lastFinishedPulling="2025-07-14 23:06:45.651977981 +0000 UTC m=+34.572684277" observedRunningTime="2025-07-14 23:06:46.371199494 +0000 UTC m=+35.291905793" watchObservedRunningTime="2025-07-14 23:06:46.69092098 +0000 UTC m=+35.611627275" Jul 14 23:06:46.703802 containerd[1538]: time="2025-07-14T23:06:46.703772300Z" level=info msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.778 [INFO][3953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.779 [INFO][3953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" iface="eth0" netns="/var/run/netns/cni-5ce95df5-6eba-1941-7bf2-1c74a9dc6580" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.780 [INFO][3953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" iface="eth0" netns="/var/run/netns/cni-5ce95df5-6eba-1941-7bf2-1c74a9dc6580" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.781 [INFO][3953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" iface="eth0" netns="/var/run/netns/cni-5ce95df5-6eba-1941-7bf2-1c74a9dc6580" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.781 [INFO][3953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:46.781 [INFO][3953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.058 [INFO][3971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.061 [INFO][3971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.062 [INFO][3971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.072 [WARNING][3971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.072 [INFO][3971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.073 [INFO][3971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:47.076517 containerd[1538]: 2025-07-14 23:06:47.075 [INFO][3953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:06:47.079931 systemd[1]: run-netns-cni\x2d5ce95df5\x2d6eba\x2d1941\x2d7bf2\x2d1c74a9dc6580.mount: Deactivated successfully. 
Jul 14 23:06:47.083839 containerd[1538]: time="2025-07-14T23:06:47.083808978Z" level=info msg="TearDown network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" successfully" Jul 14 23:06:47.083889 containerd[1538]: time="2025-07-14T23:06:47.083837826Z" level=info msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" returns successfully" Jul 14 23:06:47.113695 kubelet[2725]: I0714 23:06:47.113504 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-ca-bundle\") pod \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " Jul 14 23:06:47.113695 kubelet[2725]: I0714 23:06:47.113539 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z4r4\" (UniqueName: \"kubernetes.io/projected/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-kube-api-access-5z4r4\") pod \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " Jul 14 23:06:47.113695 kubelet[2725]: I0714 23:06:47.113558 2725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-backend-key-pair\") pod \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\" (UID: \"c1d1e827-ce30-4bdb-94ef-778bb4b83e4f\") " Jul 14 23:06:47.132161 kubelet[2725]: I0714 23:06:47.128534 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" (UID: "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 23:06:47.134145 kubelet[2725]: I0714 23:06:47.133486 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" (UID: "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 23:06:47.134145 kubelet[2725]: I0714 23:06:47.134123 2725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-kube-api-access-5z4r4" (OuterVolumeSpecName: "kube-api-access-5z4r4") pod "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" (UID: "c1d1e827-ce30-4bdb-94ef-778bb4b83e4f"). InnerVolumeSpecName "kube-api-access-5z4r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 23:06:47.134714 systemd[1]: var-lib-kubelet-pods-c1d1e827\x2dce30\x2d4bdb\x2d94ef\x2d778bb4b83e4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5z4r4.mount: Deactivated successfully. Jul 14 23:06:47.134768 systemd[1]: var-lib-kubelet-pods-c1d1e827\x2dce30\x2d4bdb\x2d94ef\x2d778bb4b83e4f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 23:06:47.191637 systemd[1]: Removed slice kubepods-besteffort-podc1d1e827_ce30_4bdb_94ef_778bb4b83e4f.slice - libcontainer container kubepods-besteffort-podc1d1e827_ce30_4bdb_94ef_778bb4b83e4f.slice. 
Jul 14 23:06:47.213943 kubelet[2725]: I0714 23:06:47.213896 2725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z4r4\" (UniqueName: \"kubernetes.io/projected/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-kube-api-access-5z4r4\") on node \"localhost\" DevicePath \"\"" Jul 14 23:06:47.213943 kubelet[2725]: I0714 23:06:47.213920 2725 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 23:06:47.213943 kubelet[2725]: I0714 23:06:47.213928 2725 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 23:06:47.332220 kubelet[2725]: I0714 23:06:47.331775 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 23:06:47.393038 systemd[1]: Created slice kubepods-besteffort-pod7b96fee8_5b1c_4633_a017_c28e7a1156e8.slice - libcontainer container kubepods-besteffort-pod7b96fee8_5b1c_4633_a017_c28e7a1156e8.slice. 
Jul 14 23:06:47.415745 kubelet[2725]: I0714 23:06:47.415291 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b96fee8-5b1c-4633-a017-c28e7a1156e8-whisker-backend-key-pair\") pod \"whisker-85b6d95f4d-b42t8\" (UID: \"7b96fee8-5b1c-4633-a017-c28e7a1156e8\") " pod="calico-system/whisker-85b6d95f4d-b42t8" Jul 14 23:06:47.415745 kubelet[2725]: I0714 23:06:47.415340 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b96fee8-5b1c-4633-a017-c28e7a1156e8-whisker-ca-bundle\") pod \"whisker-85b6d95f4d-b42t8\" (UID: \"7b96fee8-5b1c-4633-a017-c28e7a1156e8\") " pod="calico-system/whisker-85b6d95f4d-b42t8" Jul 14 23:06:47.415745 kubelet[2725]: I0714 23:06:47.415356 2725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s542f\" (UniqueName: \"kubernetes.io/projected/7b96fee8-5b1c-4633-a017-c28e7a1156e8-kube-api-access-s542f\") pod \"whisker-85b6d95f4d-b42t8\" (UID: \"7b96fee8-5b1c-4633-a017-c28e7a1156e8\") " pod="calico-system/whisker-85b6d95f4d-b42t8" Jul 14 23:06:47.695592 containerd[1538]: time="2025-07-14T23:06:47.695519204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b6d95f4d-b42t8,Uid:7b96fee8-5b1c-4633-a017-c28e7a1156e8,Namespace:calico-system,Attempt:0,}" Jul 14 23:06:47.828389 systemd-networkd[1449]: calibf747c952e7: Link UP Jul 14 23:06:47.828768 systemd-networkd[1449]: calibf747c952e7: Gained carrier Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.731 [INFO][3982] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.748 [INFO][3982] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--85b6d95f4d--b42t8-eth0 whisker-85b6d95f4d- calico-system 7b96fee8-5b1c-4633-a017-c28e7a1156e8 908 0 2025-07-14 23:06:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85b6d95f4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85b6d95f4d-b42t8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibf747c952e7 [] [] }} ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.748 [INFO][3982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.773 [INFO][3994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" HandleID="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Workload="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.773 [INFO][3994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" HandleID="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Workload="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85b6d95f4d-b42t8", "timestamp":"2025-07-14 23:06:47.773159242 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.773 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.773 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.773 [INFO][3994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.781 [INFO][3994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.796 [INFO][3994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.800 [INFO][3994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.801 [INFO][3994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.803 [INFO][3994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.803 [INFO][3994] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.803 [INFO][3994] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953 Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.806 [INFO][3994] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.810 [INFO][3994] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.810 [INFO][3994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" host="localhost" Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.810 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:47.841309 containerd[1538]: 2025-07-14 23:06:47.810 [INFO][3994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" HandleID="k8s-pod-network.cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Workload="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.814 [INFO][3982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b6d95f4d--b42t8-eth0", GenerateName:"whisker-85b6d95f4d-", Namespace:"calico-system", SelfLink:"", UID:"7b96fee8-5b1c-4633-a017-c28e7a1156e8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b6d95f4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85b6d95f4d-b42t8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf747c952e7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.814 [INFO][3982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.814 [INFO][3982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf747c952e7 ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.831 [INFO][3982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.831 [INFO][3982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85b6d95f4d--b42t8-eth0", GenerateName:"whisker-85b6d95f4d-", Namespace:"calico-system", SelfLink:"", UID:"7b96fee8-5b1c-4633-a017-c28e7a1156e8", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b6d95f4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953", Pod:"whisker-85b6d95f4d-b42t8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf747c952e7", MAC:"42:6d:15:fa:f5:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:47.848202 containerd[1538]: 2025-07-14 23:06:47.839 [INFO][3982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953" Namespace="calico-system" Pod="whisker-85b6d95f4d-b42t8" WorkloadEndpoint="localhost-k8s-whisker--85b6d95f4d--b42t8-eth0" Jul 14 23:06:47.864768 containerd[1538]: time="2025-07-14T23:06:47.863161025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:47.864768 containerd[1538]: time="2025-07-14T23:06:47.863206439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:47.864768 containerd[1538]: time="2025-07-14T23:06:47.863225091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:47.864768 containerd[1538]: time="2025-07-14T23:06:47.863287326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:47.892423 systemd[1]: Started cri-containerd-cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953.scope - libcontainer container cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953. Jul 14 23:06:47.919732 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:47.961137 containerd[1538]: time="2025-07-14T23:06:47.960899972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b6d95f4d-b42t8,Uid:7b96fee8-5b1c-4633-a017-c28e7a1156e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953\"" Jul 14 23:06:47.967569 containerd[1538]: time="2025-07-14T23:06:47.966678565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 23:06:48.121094 kernel: bpftool[4171]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 23:06:48.272417 systemd-networkd[1449]: vxlan.calico: Link UP Jul 14 23:06:48.272422 systemd-networkd[1449]: vxlan.calico: Gained carrier Jul 14 23:06:49.170895 kubelet[2725]: I0714 23:06:49.170860 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d1e827-ce30-4bdb-94ef-778bb4b83e4f" path="/var/lib/kubelet/pods/c1d1e827-ce30-4bdb-94ef-778bb4b83e4f/volumes" Jul 14 23:06:49.573584 containerd[1538]: time="2025-07-14T23:06:49.573553855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:49.574249 containerd[1538]: time="2025-07-14T23:06:49.574223345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, 
bytes read=4661207" Jul 14 23:06:49.574626 containerd[1538]: time="2025-07-14T23:06:49.574610381Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:49.575666 containerd[1538]: time="2025-07-14T23:06:49.575640365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:49.576172 containerd[1538]: time="2025-07-14T23:06:49.576059010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.60930959s" Jul 14 23:06:49.576172 containerd[1538]: time="2025-07-14T23:06:49.576086373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 23:06:49.578209 containerd[1538]: time="2025-07-14T23:06:49.578188447Z" level=info msg="CreateContainer within sandbox \"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 23:06:49.584064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount102258860.mount: Deactivated successfully. 
Jul 14 23:06:49.585761 containerd[1538]: time="2025-07-14T23:06:49.585745935Z" level=info msg="CreateContainer within sandbox \"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"50f0dc69255f0805bed30ba34dd963cfe9ad2abd1138ad8c667200ecfc627803\"" Jul 14 23:06:49.586761 containerd[1538]: time="2025-07-14T23:06:49.586190346Z" level=info msg="StartContainer for \"50f0dc69255f0805bed30ba34dd963cfe9ad2abd1138ad8c667200ecfc627803\"" Jul 14 23:06:49.610203 systemd[1]: Started cri-containerd-50f0dc69255f0805bed30ba34dd963cfe9ad2abd1138ad8c667200ecfc627803.scope - libcontainer container 50f0dc69255f0805bed30ba34dd963cfe9ad2abd1138ad8c667200ecfc627803. Jul 14 23:06:49.640323 containerd[1538]: time="2025-07-14T23:06:49.640277327Z" level=info msg="StartContainer for \"50f0dc69255f0805bed30ba34dd963cfe9ad2abd1138ad8c667200ecfc627803\" returns successfully" Jul 14 23:06:49.641468 containerd[1538]: time="2025-07-14T23:06:49.641450318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 23:06:49.737213 systemd-networkd[1449]: calibf747c952e7: Gained IPv6LL Jul 14 23:06:50.185405 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Jul 14 23:06:51.951945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505544497.mount: Deactivated successfully. 
Jul 14 23:06:52.023111 containerd[1538]: time="2025-07-14T23:06:52.023020580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:52.023650 containerd[1538]: time="2025-07-14T23:06:52.023613599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 23:06:52.024105 containerd[1538]: time="2025-07-14T23:06:52.023893430Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:52.025477 containerd[1538]: time="2025-07-14T23:06:52.025438815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:52.027196 containerd[1538]: time="2025-07-14T23:06:52.026875248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.385403026s" Jul 14 23:06:52.027196 containerd[1538]: time="2025-07-14T23:06:52.026899686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 23:06:52.029795 containerd[1538]: time="2025-07-14T23:06:52.029767726Z" level=info msg="CreateContainer within sandbox \"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 23:06:52.037465 
containerd[1538]: time="2025-07-14T23:06:52.037439004Z" level=info msg="CreateContainer within sandbox \"cd6b1c0fdd51d1e3531309fc702308e49634ad73ed531e4b291a9cf8be588953\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"90090fc44ff096bec66944a069e67371c207abd0cffaabf7907e90884325ac92\"" Jul 14 23:06:52.038345 containerd[1538]: time="2025-07-14T23:06:52.038189050Z" level=info msg="StartContainer for \"90090fc44ff096bec66944a069e67371c207abd0cffaabf7907e90884325ac92\"" Jul 14 23:06:52.060227 systemd[1]: Started cri-containerd-90090fc44ff096bec66944a069e67371c207abd0cffaabf7907e90884325ac92.scope - libcontainer container 90090fc44ff096bec66944a069e67371c207abd0cffaabf7907e90884325ac92. Jul 14 23:06:52.086840 containerd[1538]: time="2025-07-14T23:06:52.086635019Z" level=info msg="StartContainer for \"90090fc44ff096bec66944a069e67371c207abd0cffaabf7907e90884325ac92\" returns successfully" Jul 14 23:06:52.168185 containerd[1538]: time="2025-07-14T23:06:52.167927386Z" level=info msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" Jul 14 23:06:52.168628 containerd[1538]: time="2025-07-14T23:06:52.168398566Z" level=info msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" Jul 14 23:06:52.169295 containerd[1538]: time="2025-07-14T23:06:52.168952086Z" level=info msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" iface="eth0" netns="/var/run/netns/cni-915bab26-584a-3399-b7c9-f1e70e5677aa" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" iface="eth0" netns="/var/run/netns/cni-915bab26-584a-3399-b7c9-f1e70e5677aa" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" iface="eth0" netns="/var/run/netns/cni-915bab26-584a-3399-b7c9-f1e70e5677aa" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.227 [INFO][4378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.251 [INFO][4403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.251 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.251 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.255 [WARNING][4403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.255 [INFO][4403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.255 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:52.260238 containerd[1538]: 2025-07-14 23:06:52.258 [INFO][4378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:06:52.262654 containerd[1538]: time="2025-07-14T23:06:52.260341734Z" level=info msg="TearDown network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" successfully" Jul 14 23:06:52.262654 containerd[1538]: time="2025-07-14T23:06:52.260357988Z" level=info msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" returns successfully" Jul 14 23:06:52.262654 containerd[1538]: time="2025-07-14T23:06:52.260828679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jr6z,Uid:ffa01b57-cf5c-4652-8eda-490fdd179a1b,Namespace:calico-system,Attempt:1,}" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" iface="eth0" netns="/var/run/netns/cni-7b1f32db-e0a0-48c7-b347-8ad706051774" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" iface="eth0" netns="/var/run/netns/cni-7b1f32db-e0a0-48c7-b347-8ad706051774" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" iface="eth0" netns="/var/run/netns/cni-7b1f32db-e0a0-48c7-b347-8ad706051774" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.252 [INFO][4396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.252 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.255 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.261 [WARNING][4396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.261 [INFO][4396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.263 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:52.267565 containerd[1538]: 2025-07-14 23:06:52.265 [INFO][4377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:06:52.267565 containerd[1538]: time="2025-07-14T23:06:52.267339685Z" level=info msg="TearDown network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" successfully" Jul 14 23:06:52.267565 containerd[1538]: time="2025-07-14T23:06:52.267352726Z" level=info msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" returns successfully" Jul 14 23:06:52.286948 containerd[1538]: time="2025-07-14T23:06:52.267815792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-86f6f,Uid:47889b81-1613-42d1-9473-e89886fa669f,Namespace:kube-system,Attempt:1,}" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.222 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.223 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" iface="eth0" netns="/var/run/netns/cni-4546026b-d808-a36b-4391-9a677db26266" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.223 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" iface="eth0" netns="/var/run/netns/cni-4546026b-d808-a36b-4391-9a677db26266" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.224 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" iface="eth0" netns="/var/run/netns/cni-4546026b-d808-a36b-4391-9a677db26266" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.224 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.224 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.253 [INFO][4398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.253 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.263 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.268 [WARNING][4398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.269 [INFO][4398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.272 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:52.286948 containerd[1538]: 2025-07-14 23:06:52.274 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:06:52.286948 containerd[1538]: time="2025-07-14T23:06:52.275204060Z" level=info msg="TearDown network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" successfully" Jul 14 23:06:52.286948 containerd[1538]: time="2025-07-14T23:06:52.275217943Z" level=info msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" returns successfully" Jul 14 23:06:52.286948 containerd[1538]: time="2025-07-14T23:06:52.275547521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc96w,Uid:eac42073-634d-4a92-8c8b-4e4d39002987,Namespace:kube-system,Attempt:1,}" Jul 14 23:06:52.362415 systemd-networkd[1449]: cali93cabc924ca: Link UP Jul 14 23:06:52.363555 systemd-networkd[1449]: cali93cabc924ca: Gained carrier Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.308 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4jr6z-eth0 csi-node-driver- 
calico-system ffa01b57-cf5c-4652-8eda-490fdd179a1b 935 0 2025-07-14 23:06:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4jr6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93cabc924ca [] [] }} ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.309 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.325 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" HandleID="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.325 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" HandleID="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4jr6z", "timestamp":"2025-07-14 23:06:52.325755637 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.325 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.325 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.325 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.331 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.333 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.335 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.339 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.340 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.340 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.343 [INFO][4432] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91 Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.351 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.357 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.357 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" host="localhost" Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.357 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:52.386442 containerd[1538]: 2025-07-14 23:06:52.357 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" HandleID="k8s-pod-network.a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.360 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4jr6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffa01b57-cf5c-4652-8eda-490fdd179a1b", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4jr6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93cabc924ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.360 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.360 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93cabc924ca ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.365 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.367 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4jr6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffa01b57-cf5c-4652-8eda-490fdd179a1b", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91", Pod:"csi-node-driver-4jr6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93cabc924ca", MAC:"ea:8b:29:b9:a5:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.399504 containerd[1538]: 2025-07-14 23:06:52.381 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91" Namespace="calico-system" Pod="csi-node-driver-4jr6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:06:52.449016 containerd[1538]: time="2025-07-14T23:06:52.448904923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:52.449016 containerd[1538]: time="2025-07-14T23:06:52.448939693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:52.449016 containerd[1538]: time="2025-07-14T23:06:52.448949646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.449366 containerd[1538]: time="2025-07-14T23:06:52.449348703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.479222 systemd[1]: Started cri-containerd-a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91.scope - libcontainer container a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91. Jul 14 23:06:52.487656 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:52.497155 containerd[1538]: time="2025-07-14T23:06:52.496860479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jr6z,Uid:ffa01b57-cf5c-4652-8eda-490fdd179a1b,Namespace:calico-system,Attempt:1,} returns sandbox id \"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91\"" Jul 14 23:06:52.512957 kubelet[2725]: I0714 23:06:52.445746 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-85b6d95f4d-b42t8" podStartSLOduration=1.380356833 podStartE2EDuration="5.445628833s" podCreationTimestamp="2025-07-14 23:06:47 +0000 UTC" firstStartedPulling="2025-07-14 23:06:47.962525717 +0000 UTC m=+36.883232007" lastFinishedPulling="2025-07-14 23:06:52.027797709 +0000 UTC m=+40.948504007" observedRunningTime="2025-07-14 23:06:52.445525428 +0000 UTC m=+41.366231728" watchObservedRunningTime="2025-07-14 23:06:52.445628833 +0000 UTC m=+41.366335126" Jul 14 23:06:52.514929 containerd[1538]: time="2025-07-14T23:06:52.514911877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 23:06:52.536639 systemd-networkd[1449]: calif28411e7715: Link UP Jul 14 
23:06:52.537219 systemd-networkd[1449]: calif28411e7715: Gained carrier Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.403 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0 coredns-7c65d6cfc9- kube-system 47889b81-1613-42d1-9473-e89886fa669f 933 0 2025-07-14 23:06:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-86f6f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif28411e7715 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.404 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.436 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" HandleID="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.437 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" HandleID="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" 
Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-86f6f", "timestamp":"2025-07-14 23:06:52.435665935 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.437 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.438 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.438 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.449 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.454 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.471 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.482 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.506 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.506 [INFO][4470] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.508 [INFO][4470] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.514 [INFO][4470] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4470] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" host="localhost" Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:52.555426 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" HandleID="k8s-pod-network.d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.531 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"47889b81-1613-42d1-9473-e89886fa669f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-86f6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif28411e7715", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.531 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.531 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif28411e7715 ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.537 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.538 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"47889b81-1613-42d1-9473-e89886fa669f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac", Pod:"coredns-7c65d6cfc9-86f6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif28411e7715", MAC:"32:51:ad:9c:25:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.555851 containerd[1538]: 2025-07-14 23:06:52.553 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-86f6f" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:06:52.580121 containerd[1538]: time="2025-07-14T23:06:52.580012425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:52.580121 containerd[1538]: time="2025-07-14T23:06:52.580046454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:52.580121 containerd[1538]: time="2025-07-14T23:06:52.580056649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.580777 containerd[1538]: time="2025-07-14T23:06:52.580159226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.594307 systemd[1]: Started cri-containerd-d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac.scope - libcontainer container d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac. 
Jul 14 23:06:52.596760 systemd-networkd[1449]: cali54d9e493df0: Link UP Jul 14 23:06:52.597339 systemd-networkd[1449]: cali54d9e493df0: Gained carrier Jul 14 23:06:52.607626 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.418 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0 coredns-7c65d6cfc9- kube-system eac42073-634d-4a92-8c8b-4e4d39002987 934 0 2025-07-14 23:06:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hc96w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54d9e493df0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.418 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.464 [INFO][4475] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" HandleID="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.466 [INFO][4475] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" HandleID="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hc96w", "timestamp":"2025-07-14 23:06:52.464876539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.466 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.530 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.552 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.558 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.561 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.565 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.566 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.566 [INFO][4475] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.567 [INFO][4475] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.574 [INFO][4475] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.593 [INFO][4475] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.593 [INFO][4475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" host="localhost" Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.593 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:52.623880 containerd[1538]: 2025-07-14 23:06:52.593 [INFO][4475] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" HandleID="k8s-pod-network.59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.594 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eac42073-634d-4a92-8c8b-4e4d39002987", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hc96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54d9e493df0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.594 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.594 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54d9e493df0 ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.598 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.599 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eac42073-634d-4a92-8c8b-4e4d39002987", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f", Pod:"coredns-7c65d6cfc9-hc96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54d9e493df0", MAC:"c2:3f:3a:11:2c:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:52.624520 containerd[1538]: 2025-07-14 23:06:52.621 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc96w" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:06:52.637378 containerd[1538]: time="2025-07-14T23:06:52.637207216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-86f6f,Uid:47889b81-1613-42d1-9473-e89886fa669f,Namespace:kube-system,Attempt:1,} returns sandbox id \"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac\"" Jul 14 23:06:52.638385 systemd[1]: run-netns-cni\x2d915bab26\x2d584a\x2d3399\x2db7c9\x2df1e70e5677aa.mount: Deactivated successfully. Jul 14 23:06:52.638441 systemd[1]: run-netns-cni\x2d7b1f32db\x2de0a0\x2d48c7\x2db347\x2d8ad706051774.mount: Deactivated successfully. Jul 14 23:06:52.638476 systemd[1]: run-netns-cni\x2d4546026b\x2dd808\x2da36b\x2d4391\x2d9a677db26266.mount: Deactivated successfully. Jul 14 23:06:52.650540 containerd[1538]: time="2025-07-14T23:06:52.650358432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:52.650540 containerd[1538]: time="2025-07-14T23:06:52.650414573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:52.650540 containerd[1538]: time="2025-07-14T23:06:52.650436213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.651159 containerd[1538]: time="2025-07-14T23:06:52.650907913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:52.673172 systemd[1]: Started cri-containerd-59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f.scope - libcontainer container 59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f. 
Jul 14 23:06:52.681716 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:52.701615 containerd[1538]: time="2025-07-14T23:06:52.701521759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc96w,Uid:eac42073-634d-4a92-8c8b-4e4d39002987,Namespace:kube-system,Attempt:1,} returns sandbox id \"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f\"" Jul 14 23:06:52.717292 containerd[1538]: time="2025-07-14T23:06:52.717274294Z" level=info msg="CreateContainer within sandbox \"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 23:06:52.726723 containerd[1538]: time="2025-07-14T23:06:52.726613485Z" level=info msg="CreateContainer within sandbox \"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 23:06:52.761117 containerd[1538]: time="2025-07-14T23:06:52.761087547Z" level=info msg="CreateContainer within sandbox \"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"521029700d4e70d86b67b6e16985b9510628601b688187dcd72ae7516153c5a2\"" Jul 14 23:06:52.761421 containerd[1538]: time="2025-07-14T23:06:52.761385963Z" level=info msg="StartContainer for \"521029700d4e70d86b67b6e16985b9510628601b688187dcd72ae7516153c5a2\"" Jul 14 23:06:52.764124 containerd[1538]: time="2025-07-14T23:06:52.762834648Z" level=info msg="CreateContainer within sandbox \"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fab6ea707903f7346806fa547233649fad574ccc789eb0c2078fe73136c0903\"" Jul 14 23:06:52.764124 containerd[1538]: time="2025-07-14T23:06:52.763016860Z" level=info msg="StartContainer for 
\"7fab6ea707903f7346806fa547233649fad574ccc789eb0c2078fe73136c0903\"" Jul 14 23:06:52.785304 systemd[1]: Started cri-containerd-7fab6ea707903f7346806fa547233649fad574ccc789eb0c2078fe73136c0903.scope - libcontainer container 7fab6ea707903f7346806fa547233649fad574ccc789eb0c2078fe73136c0903. Jul 14 23:06:52.787934 systemd[1]: Started cri-containerd-521029700d4e70d86b67b6e16985b9510628601b688187dcd72ae7516153c5a2.scope - libcontainer container 521029700d4e70d86b67b6e16985b9510628601b688187dcd72ae7516153c5a2. Jul 14 23:06:52.808355 containerd[1538]: time="2025-07-14T23:06:52.808303219Z" level=info msg="StartContainer for \"521029700d4e70d86b67b6e16985b9510628601b688187dcd72ae7516153c5a2\" returns successfully" Jul 14 23:06:52.808355 containerd[1538]: time="2025-07-14T23:06:52.808339245Z" level=info msg="StartContainer for \"7fab6ea707903f7346806fa547233649fad574ccc789eb0c2078fe73136c0903\" returns successfully" Jul 14 23:06:53.164865 containerd[1538]: time="2025-07-14T23:06:53.163931963Z" level=info msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.203 [INFO][4707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.204 [INFO][4707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" iface="eth0" netns="/var/run/netns/cni-0bb283a6-89cf-794b-b9ed-cedb8ff045a3" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.204 [INFO][4707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" iface="eth0" netns="/var/run/netns/cni-0bb283a6-89cf-794b-b9ed-cedb8ff045a3" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.204 [INFO][4707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" iface="eth0" netns="/var/run/netns/cni-0bb283a6-89cf-794b-b9ed-cedb8ff045a3" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.204 [INFO][4707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.204 [INFO][4707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.221 [INFO][4714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.221 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.221 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.225 [WARNING][4714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.225 [INFO][4714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.226 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:53.228244 containerd[1538]: 2025-07-14 23:06:53.227 [INFO][4707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:06:53.229357 containerd[1538]: time="2025-07-14T23:06:53.228309137Z" level=info msg="TearDown network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" successfully" Jul 14 23:06:53.229357 containerd[1538]: time="2025-07-14T23:06:53.228325479Z" level=info msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" returns successfully" Jul 14 23:06:53.229357 containerd[1538]: time="2025-07-14T23:06:53.228920302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-vnrld,Uid:b3b1fbe5-9018-4070-bc43-78a6548c5e8b,Namespace:calico-apiserver,Attempt:1,}" Jul 14 23:06:53.290029 systemd-networkd[1449]: cali7e858eefa40: Link UP Jul 14 23:06:53.290893 systemd-networkd[1449]: cali7e858eefa40: Gained carrier Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.251 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0 calico-apiserver-7fb9554685- calico-apiserver b3b1fbe5-9018-4070-bc43-78a6548c5e8b 962 0 2025-07-14 23:06:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fb9554685 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fb9554685-vnrld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7e858eefa40 [] [] }} ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.252 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.266 [INFO][4732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" HandleID="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.267 [INFO][4732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" HandleID="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00024f8a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fb9554685-vnrld", "timestamp":"2025-07-14 23:06:53.266888101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.267 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.267 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.267 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.272 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.274 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.276 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.277 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.278 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.278 [INFO][4732] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.278 [INFO][4732] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.282 [INFO][4732] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.286 [INFO][4732] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.286 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" host="localhost" Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.286 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:53.301622 containerd[1538]: 2025-07-14 23:06:53.286 [INFO][4732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" HandleID="k8s-pod-network.8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.287 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3b1fbe5-9018-4070-bc43-78a6548c5e8b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fb9554685-vnrld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e858eefa40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.287 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.287 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e858eefa40 ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.290 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.291 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0", GenerateName:"calico-apiserver-7fb9554685-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"b3b1fbe5-9018-4070-bc43-78a6548c5e8b", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab", Pod:"calico-apiserver-7fb9554685-vnrld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e858eefa40", MAC:"56:31:fa:85:88:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:53.302515 containerd[1538]: 2025-07-14 23:06:53.299 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-vnrld" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:06:53.316574 containerd[1538]: time="2025-07-14T23:06:53.316204005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:53.316651 containerd[1538]: time="2025-07-14T23:06:53.316582552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:53.316651 containerd[1538]: time="2025-07-14T23:06:53.316601491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:53.316933 containerd[1538]: time="2025-07-14T23:06:53.316661401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:53.335185 systemd[1]: Started cri-containerd-8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab.scope - libcontainer container 8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab. Jul 14 23:06:53.342633 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:53.365279 kubelet[2725]: I0714 23:06:53.364984 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-86f6f" podStartSLOduration=36.364973081 podStartE2EDuration="36.364973081s" podCreationTimestamp="2025-07-14 23:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:53.364415229 +0000 UTC m=+42.285121528" watchObservedRunningTime="2025-07-14 23:06:53.364973081 +0000 UTC m=+42.285679375" Jul 14 23:06:53.373389 containerd[1538]: time="2025-07-14T23:06:53.373024489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-vnrld,Uid:b3b1fbe5-9018-4070-bc43-78a6548c5e8b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab\"" Jul 14 23:06:53.636272 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4264363391.mount: Deactivated successfully. Jul 14 23:06:53.636338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332352486.mount: Deactivated successfully. Jul 14 23:06:53.636406 systemd[1]: run-netns-cni\x2d0bb283a6\x2d89cf\x2d794b\x2db9ed\x2dcedb8ff045a3.mount: Deactivated successfully. Jul 14 23:06:53.769302 systemd-networkd[1449]: cali93cabc924ca: Gained IPv6LL Jul 14 23:06:54.163961 containerd[1538]: time="2025-07-14T23:06:54.163936971Z" level=info msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" Jul 14 23:06:54.164705 containerd[1538]: time="2025-07-14T23:06:54.163946717Z" level=info msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" Jul 14 23:06:54.203039 kubelet[2725]: I0714 23:06:54.203001 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hc96w" podStartSLOduration=37.202987595 podStartE2EDuration="37.202987595s" podCreationTimestamp="2025-07-14 23:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:06:53.390765867 +0000 UTC m=+42.311472166" watchObservedRunningTime="2025-07-14 23:06:54.202987595 +0000 UTC m=+43.123693888" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" iface="eth0" netns="/var/run/netns/cni-3391564b-e461-11e3-ebc7-94f9d700bebc" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" iface="eth0" netns="/var/run/netns/cni-3391564b-e461-11e3-ebc7-94f9d700bebc" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" iface="eth0" netns="/var/run/netns/cni-3391564b-e461-11e3-ebc7-94f9d700bebc" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.208 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.227 [INFO][4832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.228 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.228 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.232 [WARNING][4832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.232 [INFO][4832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.233 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:54.236825 containerd[1538]: 2025-07-14 23:06:54.235 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:06:54.238106 containerd[1538]: time="2025-07-14T23:06:54.237476893Z" level=info msg="TearDown network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" successfully" Jul 14 23:06:54.238106 containerd[1538]: time="2025-07-14T23:06:54.237495208Z" level=info msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" returns successfully" Jul 14 23:06:54.238276 containerd[1538]: time="2025-07-14T23:06:54.238200535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789c44b4f5-hbf9t,Uid:f381ec3d-1f99-48f5-a759-d4ef727cd042,Namespace:calico-system,Attempt:1,}" Jul 14 23:06:54.238379 systemd[1]: run-netns-cni\x2d3391564b\x2de461\x2d11e3\x2debc7\x2d94f9d700bebc.mount: Deactivated successfully. 
Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.202 [INFO][4813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.203 [INFO][4813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" iface="eth0" netns="/var/run/netns/cni-2361a451-1cdc-2af2-93d9-dd682c7a6580" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.203 [INFO][4813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" iface="eth0" netns="/var/run/netns/cni-2361a451-1cdc-2af2-93d9-dd682c7a6580" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.205 [INFO][4813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" iface="eth0" netns="/var/run/netns/cni-2361a451-1cdc-2af2-93d9-dd682c7a6580" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.205 [INFO][4813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.205 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.232 [INFO][4827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.232 [INFO][4827] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.233 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.239 [WARNING][4827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.239 [INFO][4827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.240 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:54.243566 containerd[1538]: 2025-07-14 23:06:54.241 [INFO][4813] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:06:54.243975 containerd[1538]: time="2025-07-14T23:06:54.243961378Z" level=info msg="TearDown network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" successfully" Jul 14 23:06:54.244015 containerd[1538]: time="2025-07-14T23:06:54.244008187Z" level=info msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" returns successfully" Jul 14 23:06:54.244622 containerd[1538]: time="2025-07-14T23:06:54.244600000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-xx2mf,Uid:010be3c5-59c9-4ff4-a6d2-2124514c9299,Namespace:calico-apiserver,Attempt:1,}" Jul 14 23:06:54.247849 systemd[1]: run-netns-cni\x2d2361a451\x2d1cdc\x2d2af2\x2d93d9\x2ddd682c7a6580.mount: Deactivated successfully. Jul 14 23:06:54.282129 systemd-networkd[1449]: cali54d9e493df0: Gained IPv6LL Jul 14 23:06:54.372509 systemd-networkd[1449]: cali55fefe79caa: Link UP Jul 14 23:06:54.372876 systemd-networkd[1449]: cali55fefe79caa: Gained carrier Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.284 [INFO][4840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0 calico-kube-controllers-789c44b4f5- calico-system f381ec3d-1f99-48f5-a759-d4ef727cd042 985 0 2025-07-14 23:06:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:789c44b4f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-789c44b4f5-hbf9t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55fefe79caa [] [] }} ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" 
Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.284 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.305 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" HandleID="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.305 [INFO][4862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" HandleID="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-789c44b4f5-hbf9t", "timestamp":"2025-07-14 23:06:54.305705308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.306 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.306 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.306 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.311 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.314 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.317 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.318 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.319 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.319 [INFO][4862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.320 [INFO][4862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.338 [INFO][4862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.365 [INFO][4862] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.365 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" host="localhost" Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.365 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:54.390969 containerd[1538]: 2025-07-14 23:06:54.366 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" HandleID="k8s-pod-network.f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.369 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0", GenerateName:"calico-kube-controllers-789c44b4f5-", Namespace:"calico-system", SelfLink:"", UID:"f381ec3d-1f99-48f5-a759-d4ef727cd042", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"789c44b4f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-789c44b4f5-hbf9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55fefe79caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.369 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.369 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55fefe79caa ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.373 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.373 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0", GenerateName:"calico-kube-controllers-789c44b4f5-", Namespace:"calico-system", SelfLink:"", UID:"f381ec3d-1f99-48f5-a759-d4ef727cd042", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789c44b4f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c", Pod:"calico-kube-controllers-789c44b4f5-hbf9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55fefe79caa", MAC:"b6:86:6e:79:9e:f1", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:54.393049 containerd[1538]: 2025-07-14 23:06:54.389 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c" Namespace="calico-system" Pod="calico-kube-controllers-789c44b4f5-hbf9t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:06:54.423614 containerd[1538]: time="2025-07-14T23:06:54.422880960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:54.423614 containerd[1538]: time="2025-07-14T23:06:54.422930098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:54.423614 containerd[1538]: time="2025-07-14T23:06:54.422938601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:54.423614 containerd[1538]: time="2025-07-14T23:06:54.422998651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:54.436150 systemd-networkd[1449]: cali8fe711b9864: Link UP Jul 14 23:06:54.436756 systemd-networkd[1449]: cali8fe711b9864: Gained carrier Jul 14 23:06:54.443168 systemd[1]: Started cri-containerd-f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c.scope - libcontainer container f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c. 
Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.302 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0 calico-apiserver-7fb9554685- calico-apiserver 010be3c5-59c9-4ff4-a6d2-2124514c9299 984 0 2025-07-14 23:06:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fb9554685 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fb9554685-xx2mf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8fe711b9864 [] [] }} ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.302 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.324 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" HandleID="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.324 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" 
HandleID="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fb9554685-xx2mf", "timestamp":"2025-07-14 23:06:54.324258354 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.324 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.366 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.366 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.411 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.414 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.420 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.421 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.424 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 
2025-07-14 23:06:54.424 [INFO][4870] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.425 [INFO][4870] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.428 [INFO][4870] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.431 [INFO][4870] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.431 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" host="localhost" Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.431 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 23:06:54.452122 containerd[1538]: 2025-07-14 23:06:54.431 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" HandleID="k8s-pod-network.9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.434 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"010be3c5-59c9-4ff4-a6d2-2124514c9299", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fb9554685-xx2mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fe711b9864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.434 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.434 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fe711b9864 ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.436 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.437 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0", GenerateName:"calico-apiserver-7fb9554685-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"010be3c5-59c9-4ff4-a6d2-2124514c9299", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee", Pod:"calico-apiserver-7fb9554685-xx2mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fe711b9864", MAC:"8e:fe:aa:61:4e:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:54.453284 containerd[1538]: 2025-07-14 23:06:54.447 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee" Namespace="calico-apiserver" Pod="calico-apiserver-7fb9554685-xx2mf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:06:54.460969 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:54.471113 containerd[1538]: time="2025-07-14T23:06:54.470931969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:54.471113 containerd[1538]: time="2025-07-14T23:06:54.470994413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:54.471113 containerd[1538]: time="2025-07-14T23:06:54.471008873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:54.471113 containerd[1538]: time="2025-07-14T23:06:54.471057890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:54.473886 systemd-networkd[1449]: calif28411e7715: Gained IPv6LL Jul 14 23:06:54.490259 systemd[1]: Started cri-containerd-9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee.scope - libcontainer container 9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee. Jul 14 23:06:54.491747 containerd[1538]: time="2025-07-14T23:06:54.491725650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789c44b4f5-hbf9t,Uid:f381ec3d-1f99-48f5-a759-d4ef727cd042,Namespace:calico-system,Attempt:1,} returns sandbox id \"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c\"" Jul 14 23:06:54.499796 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:54.517852 containerd[1538]: time="2025-07-14T23:06:54.517828814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fb9554685-xx2mf,Uid:010be3c5-59c9-4ff4-a6d2-2124514c9299,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee\"" Jul 14 23:06:54.601227 systemd-networkd[1449]: cali7e858eefa40: Gained IPv6LL Jul 14 23:06:55.165836 containerd[1538]: time="2025-07-14T23:06:55.165636755Z" level=info msg="StopPodSandbox 
for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" iface="eth0" netns="/var/run/netns/cni-5426c6ea-6f87-886f-bd03-31075545eb6d" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" iface="eth0" netns="/var/run/netns/cni-5426c6ea-6f87-886f-bd03-31075545eb6d" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" iface="eth0" netns="/var/run/netns/cni-5426c6ea-6f87-886f-bd03-31075545eb6d" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.218 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.232 [INFO][4999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.232 [INFO][4999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.232 [INFO][4999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.237 [WARNING][4999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.237 [INFO][4999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.237 [INFO][4999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:55.239780 containerd[1538]: 2025-07-14 23:06:55.238 [INFO][4991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:06:55.240225 containerd[1538]: time="2025-07-14T23:06:55.239891825Z" level=info msg="TearDown network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" successfully" Jul 14 23:06:55.240225 containerd[1538]: time="2025-07-14T23:06:55.239907418Z" level=info msg="StopPodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" returns successfully" Jul 14 23:06:55.241500 systemd[1]: run-netns-cni\x2d5426c6ea\x2d6f87\x2d886f\x2dbd03\x2d31075545eb6d.mount: Deactivated successfully. 
Jul 14 23:06:55.242298 containerd[1538]: time="2025-07-14T23:06:55.242274191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5d4g6,Uid:43e0fd53-8ec8-4b02-86e2-a124557aa367,Namespace:calico-system,Attempt:1,}" Jul 14 23:06:55.341823 systemd-networkd[1449]: cali6f7462abe57: Link UP Jul 14 23:06:55.343354 systemd-networkd[1449]: cali6f7462abe57: Gained carrier Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.282 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0 goldmane-58fd7646b9- calico-system 43e0fd53-8ec8-4b02-86e2-a124557aa367 996 0 2025-07-14 23:06:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-5d4g6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6f7462abe57 [] [] }} ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.282 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.301 [INFO][5023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" HandleID="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 
14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.301 [INFO][5023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" HandleID="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-5d4g6", "timestamp":"2025-07-14 23:06:55.301124145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.301 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.301 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.301 [INFO][5023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.305 [INFO][5023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.308 [INFO][5023] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.310 [INFO][5023] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.312 [INFO][5023] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.314 [INFO][5023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.314 [INFO][5023] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.314 [INFO][5023] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454 Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.320 [INFO][5023] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.332 [INFO][5023] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.333 [INFO][5023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" host="localhost" Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.333 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:06:55.362649 containerd[1538]: 2025-07-14 23:06:55.333 [INFO][5023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" HandleID="k8s-pod-network.26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.335 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43e0fd53-8ec8-4b02-86e2-a124557aa367", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-5d4g6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f7462abe57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.335 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.335 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f7462abe57 ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.344 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.345 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43e0fd53-8ec8-4b02-86e2-a124557aa367", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454", Pod:"goldmane-58fd7646b9-5d4g6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f7462abe57", MAC:"6e:4e:fa:6a:d3:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:06:55.363430 containerd[1538]: 2025-07-14 23:06:55.361 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454" Namespace="calico-system" Pod="goldmane-58fd7646b9-5d4g6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:06:55.393006 containerd[1538]: time="2025-07-14T23:06:55.392883349Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:06:55.393006 containerd[1538]: time="2025-07-14T23:06:55.392914793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:06:55.393144 containerd[1538]: time="2025-07-14T23:06:55.392924590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:55.393144 containerd[1538]: time="2025-07-14T23:06:55.393040542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:06:55.407183 systemd[1]: Started cri-containerd-26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454.scope - libcontainer container 26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454. Jul 14 23:06:55.419365 systemd-resolved[1450]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:06:55.439577 containerd[1538]: time="2025-07-14T23:06:55.439547161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5d4g6,Uid:43e0fd53-8ec8-4b02-86e2-a124557aa367,Namespace:calico-system,Attempt:1,} returns sandbox id \"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454\"" Jul 14 23:06:55.481254 containerd[1538]: time="2025-07-14T23:06:55.481211417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 14 23:06:55.487009 containerd[1538]: time="2025-07-14T23:06:55.486982964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:55.488158 containerd[1538]: time="2025-07-14T23:06:55.488121204Z" level=info msg="ImageCreate event 
name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:55.493198 containerd[1538]: time="2025-07-14T23:06:55.493170504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:55.498729 containerd[1538]: time="2025-07-14T23:06:55.493491375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.978448622s" Jul 14 23:06:55.498729 containerd[1538]: time="2025-07-14T23:06:55.493511040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 23:06:55.498729 containerd[1538]: time="2025-07-14T23:06:55.494524081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 23:06:55.499405 containerd[1538]: time="2025-07-14T23:06:55.499302678Z" level=info msg="CreateContainer within sandbox \"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 23:06:55.512753 containerd[1538]: time="2025-07-14T23:06:55.512711114Z" level=info msg="CreateContainer within sandbox \"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"da22491e761ebdee1c5dae16e90d26e76e647f2f1d9cf4d81ae1e0fae1e69da6\"" Jul 14 23:06:55.513139 containerd[1538]: time="2025-07-14T23:06:55.512962591Z" level=info msg="StartContainer for 
\"da22491e761ebdee1c5dae16e90d26e76e647f2f1d9cf4d81ae1e0fae1e69da6\"" Jul 14 23:06:55.530157 systemd[1]: Started cri-containerd-da22491e761ebdee1c5dae16e90d26e76e647f2f1d9cf4d81ae1e0fae1e69da6.scope - libcontainer container da22491e761ebdee1c5dae16e90d26e76e647f2f1d9cf4d81ae1e0fae1e69da6. Jul 14 23:06:55.551356 containerd[1538]: time="2025-07-14T23:06:55.551330750Z" level=info msg="StartContainer for \"da22491e761ebdee1c5dae16e90d26e76e647f2f1d9cf4d81ae1e0fae1e69da6\" returns successfully" Jul 14 23:06:55.689286 systemd-networkd[1449]: cali55fefe79caa: Gained IPv6LL Jul 14 23:06:56.201342 systemd-networkd[1449]: cali8fe711b9864: Gained IPv6LL Jul 14 23:06:56.454704 kubelet[2725]: I0714 23:06:56.454631 2725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 23:06:56.576785 systemd[1]: run-containerd-runc-k8s.io-666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935-runc.UgUMEH.mount: Deactivated successfully. Jul 14 23:06:56.691338 systemd[1]: run-containerd-runc-k8s.io-666f9036db36bdc2651baf685f6ec33ab82ba988c387f39457afc4f00b49c935-runc.zHeciO.mount: Deactivated successfully. 
Jul 14 23:06:57.225171 systemd-networkd[1449]: cali6f7462abe57: Gained IPv6LL Jul 14 23:06:59.417975 containerd[1538]: time="2025-07-14T23:06:59.417921605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:59.419109 containerd[1538]: time="2025-07-14T23:06:59.419086057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 23:06:59.421705 containerd[1538]: time="2025-07-14T23:06:59.421119626Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:59.422350 containerd[1538]: time="2025-07-14T23:06:59.422336725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:06:59.423205 containerd[1538]: time="2025-07-14T23:06:59.423191979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.928641772s" Jul 14 23:06:59.423259 containerd[1538]: time="2025-07-14T23:06:59.423251081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 23:06:59.424328 containerd[1538]: time="2025-07-14T23:06:59.424319315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 23:06:59.426802 containerd[1538]: time="2025-07-14T23:06:59.426787300Z" level=info 
msg="CreateContainer within sandbox \"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 23:06:59.465205 containerd[1538]: time="2025-07-14T23:06:59.465183159Z" level=info msg="CreateContainer within sandbox \"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3200fe5edc7211e386c50d3f9b937e054f4072de3d5baa6689e4cdd989ac23b\"" Jul 14 23:06:59.473666 containerd[1538]: time="2025-07-14T23:06:59.473359660Z" level=info msg="StartContainer for \"c3200fe5edc7211e386c50d3f9b937e054f4072de3d5baa6689e4cdd989ac23b\"" Jul 14 23:06:59.519305 systemd[1]: Started cri-containerd-c3200fe5edc7211e386c50d3f9b937e054f4072de3d5baa6689e4cdd989ac23b.scope - libcontainer container c3200fe5edc7211e386c50d3f9b937e054f4072de3d5baa6689e4cdd989ac23b. Jul 14 23:06:59.574142 containerd[1538]: time="2025-07-14T23:06:59.574120995Z" level=info msg="StartContainer for \"c3200fe5edc7211e386c50d3f9b937e054f4072de3d5baa6689e4cdd989ac23b\" returns successfully" Jul 14 23:07:00.638308 kubelet[2725]: I0714 23:07:00.638228 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fb9554685-vnrld" podStartSLOduration=29.571034083 podStartE2EDuration="35.62094943s" podCreationTimestamp="2025-07-14 23:06:25 +0000 UTC" firstStartedPulling="2025-07-14 23:06:53.374274414 +0000 UTC m=+42.294980704" lastFinishedPulling="2025-07-14 23:06:59.424189762 +0000 UTC m=+48.344896051" observedRunningTime="2025-07-14 23:07:00.584672352 +0000 UTC m=+49.505378651" watchObservedRunningTime="2025-07-14 23:07:00.62094943 +0000 UTC m=+49.541655722" Jul 14 23:07:04.870541 containerd[1538]: time="2025-07-14T23:07:04.870508757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 
23:07:04.912047 containerd[1538]: time="2025-07-14T23:07:04.871782624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 23:07:04.915083 containerd[1538]: time="2025-07-14T23:07:04.884129736Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:04.916114 containerd[1538]: time="2025-07-14T23:07:04.916101896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:04.916868 containerd[1538]: time="2025-07-14T23:07:04.916532214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.492159446s" Jul 14 23:07:04.916900 containerd[1538]: time="2025-07-14T23:07:04.916872618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 23:07:04.983669 containerd[1538]: time="2025-07-14T23:07:04.983489056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 23:07:05.216037 containerd[1538]: time="2025-07-14T23:07:05.215715813Z" level=info msg="CreateContainer within sandbox \"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 23:07:05.354593 containerd[1538]: time="2025-07-14T23:07:05.354382434Z" level=info 
msg="CreateContainer within sandbox \"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5a172af7d0dfe0fc603d8766a08111059479e3008af0d957594be0940fdf0ecc\"" Jul 14 23:07:05.435045 containerd[1538]: time="2025-07-14T23:07:05.434779107Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:05.435045 containerd[1538]: time="2025-07-14T23:07:05.435005672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 23:07:05.441580 containerd[1538]: time="2025-07-14T23:07:05.441433013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 457.920322ms" Jul 14 23:07:05.441580 containerd[1538]: time="2025-07-14T23:07:05.441455850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 23:07:05.651561 containerd[1538]: time="2025-07-14T23:07:05.651319868Z" level=info msg="StartContainer for \"5a172af7d0dfe0fc603d8766a08111059479e3008af0d957594be0940fdf0ecc\"" Jul 14 23:07:05.776729 containerd[1538]: time="2025-07-14T23:07:05.776708304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 23:07:05.783239 containerd[1538]: time="2025-07-14T23:07:05.783144610Z" level=info msg="CreateContainer within sandbox \"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 23:07:05.821198 
containerd[1538]: time="2025-07-14T23:07:05.820208625Z" level=info msg="CreateContainer within sandbox \"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d3bae5098c0b5f9bc31832665ffcf0ba948fd1e94df5d769e30e7268f79d4ff\"" Jul 14 23:07:05.862094 containerd[1538]: time="2025-07-14T23:07:05.861674353Z" level=info msg="StartContainer for \"0d3bae5098c0b5f9bc31832665ffcf0ba948fd1e94df5d769e30e7268f79d4ff\"" Jul 14 23:07:06.014299 systemd[1]: Started cri-containerd-0d3bae5098c0b5f9bc31832665ffcf0ba948fd1e94df5d769e30e7268f79d4ff.scope - libcontainer container 0d3bae5098c0b5f9bc31832665ffcf0ba948fd1e94df5d769e30e7268f79d4ff. Jul 14 23:07:06.015341 systemd[1]: Started cri-containerd-5a172af7d0dfe0fc603d8766a08111059479e3008af0d957594be0940fdf0ecc.scope - libcontainer container 5a172af7d0dfe0fc603d8766a08111059479e3008af0d957594be0940fdf0ecc. Jul 14 23:07:06.069183 containerd[1538]: time="2025-07-14T23:07:06.069044966Z" level=info msg="StartContainer for \"5a172af7d0dfe0fc603d8766a08111059479e3008af0d957594be0940fdf0ecc\" returns successfully" Jul 14 23:07:06.078415 containerd[1538]: time="2025-07-14T23:07:06.078392142Z" level=info msg="StartContainer for \"0d3bae5098c0b5f9bc31832665ffcf0ba948fd1e94df5d769e30e7268f79d4ff\" returns successfully" Jul 14 23:07:07.107638 kubelet[2725]: I0714 23:07:07.105671 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fb9554685-xx2mf" podStartSLOduration=30.88093214 podStartE2EDuration="42.085220942s" podCreationTimestamp="2025-07-14 23:06:25 +0000 UTC" firstStartedPulling="2025-07-14 23:06:54.518634345 +0000 UTC m=+43.439340635" lastFinishedPulling="2025-07-14 23:07:05.722923147 +0000 UTC m=+54.643629437" observedRunningTime="2025-07-14 23:07:07.040174262 +0000 UTC m=+55.960880561" watchObservedRunningTime="2025-07-14 23:07:07.085220942 +0000 UTC m=+56.005927237" Jul 14 
23:07:07.112110 kubelet[2725]: I0714 23:07:07.108119 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-789c44b4f5-hbf9t" podStartSLOduration=28.636296148 podStartE2EDuration="39.108110665s" podCreationTimestamp="2025-07-14 23:06:28 +0000 UTC" firstStartedPulling="2025-07-14 23:06:54.492365693 +0000 UTC m=+43.413071984" lastFinishedPulling="2025-07-14 23:07:04.96418021 +0000 UTC m=+53.884886501" observedRunningTime="2025-07-14 23:07:07.051124962 +0000 UTC m=+55.971831262" watchObservedRunningTime="2025-07-14 23:07:07.108110665 +0000 UTC m=+56.028816958" Jul 14 23:07:11.403800 containerd[1538]: time="2025-07-14T23:07:11.403719606Z" level=info msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" Jul 14 23:07:12.757134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252773403.mount: Deactivated successfully. Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.178 [WARNING][5371] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" WorkloadEndpoint="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.183 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.183 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" iface="eth0" netns="" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.183 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.183 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.790 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.795 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.796 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.812 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.812 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.814 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:12.820486 containerd[1538]: 2025-07-14 23:07:12.817 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:12.820486 containerd[1538]: time="2025-07-14T23:07:12.820511701Z" level=info msg="TearDown network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" successfully" Jul 14 23:07:12.820486 containerd[1538]: time="2025-07-14T23:07:12.820536089Z" level=info msg="StopPodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" returns successfully" Jul 14 23:07:13.158450 containerd[1538]: time="2025-07-14T23:07:13.158374232Z" level=info msg="RemovePodSandbox for \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" Jul 14 23:07:13.163760 containerd[1538]: time="2025-07-14T23:07:13.163296748Z" level=info msg="Forcibly stopping sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\"" Jul 14 23:07:13.918105 containerd[1538]: time="2025-07-14T23:07:13.917524310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:13.926088 containerd[1538]: 
time="2025-07-14T23:07:13.925810853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 14 23:07:13.947092 containerd[1538]: time="2025-07-14T23:07:13.947054195Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:13.950516 containerd[1538]: time="2025-07-14T23:07:13.950480530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:13.951994 containerd[1538]: time="2025-07-14T23:07:13.951959955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 8.17356994s" Jul 14 23:07:13.959475 containerd[1538]: time="2025-07-14T23:07:13.959438736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 23:07:14.069290 containerd[1538]: time="2025-07-14T23:07:14.068695944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:13.684 [WARNING][5404] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" WorkloadEndpoint="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:13.685 [INFO][5404] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:13.685 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" iface="eth0" netns="" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:13.685 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:13.685 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.156 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.157 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.159 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.171 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.171 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" HandleID="k8s-pod-network.1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Workload="localhost-k8s-whisker--5b6c867988--cbx84-eth0" Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.172 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.178341 containerd[1538]: 2025-07-14 23:07:14.173 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe" Jul 14 23:07:14.178341 containerd[1538]: time="2025-07-14T23:07:14.178124089Z" level=info msg="TearDown network for sandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" successfully" Jul 14 23:07:14.208168 containerd[1538]: time="2025-07-14T23:07:14.208092183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:14.211536 containerd[1538]: time="2025-07-14T23:07:14.211148929Z" level=info msg="RemovePodSandbox \"1cd51275a95c9ccc42b96cd64df25c1c45e843975e12ce4fd50640deae067dbe\" returns successfully" Jul 14 23:07:14.239364 containerd[1538]: time="2025-07-14T23:07:14.239193206Z" level=info msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" Jul 14 23:07:14.253711 containerd[1538]: time="2025-07-14T23:07:14.253695009Z" level=info msg="CreateContainer within sandbox \"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.318 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"010be3c5-59c9-4ff4-a6d2-2124514c9299", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee", Pod:"calico-apiserver-7fb9554685-xx2mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fe711b9864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.319 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.319 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" iface="eth0" netns="" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.319 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.319 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.348 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.349 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.349 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.352 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.352 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.353 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.359510 containerd[1538]: 2025-07-14 23:07:14.357 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.398935 containerd[1538]: time="2025-07-14T23:07:14.359695190Z" level=info msg="TearDown network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" successfully" Jul 14 23:07:14.398935 containerd[1538]: time="2025-07-14T23:07:14.359714440Z" level=info msg="StopPodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" returns successfully" Jul 14 23:07:14.468333 containerd[1538]: time="2025-07-14T23:07:14.467642093Z" level=info msg="RemovePodSandbox for \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" Jul 14 23:07:14.468333 containerd[1538]: time="2025-07-14T23:07:14.467669628Z" level=info msg="Forcibly stopping sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\"" Jul 14 23:07:14.477512 containerd[1538]: time="2025-07-14T23:07:14.477366248Z" level=info msg="CreateContainer within sandbox \"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"50c51de6cef4881b66c94c2c2e7267f2c56f4927772ce78258400cfc30a7a12c\"" Jul 14 23:07:14.478810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826748176.mount: Deactivated successfully. Jul 14 23:07:14.499015 containerd[1538]: time="2025-07-14T23:07:14.498168457Z" level=info msg="StartContainer for \"50c51de6cef4881b66c94c2c2e7267f2c56f4927772ce78258400cfc30a7a12c\"" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.514 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"010be3c5-59c9-4ff4-a6d2-2124514c9299", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b5404d95355c758267e0a757a3012352d7b3fe8101f319462097d064c02c4ee", Pod:"calico-apiserver-7fb9554685-xx2mf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fe711b9864", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.514 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.514 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" iface="eth0" netns="" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.514 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.514 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.532 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.532 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.532 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.539 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.539 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" HandleID="k8s-pod-network.d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Workload="localhost-k8s-calico--apiserver--7fb9554685--xx2mf-eth0" Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.541 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.547127 containerd[1538]: 2025-07-14 23:07:14.544 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254" Jul 14 23:07:14.547127 containerd[1538]: time="2025-07-14T23:07:14.546936830Z" level=info msg="TearDown network for sandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" successfully" Jul 14 23:07:14.559883 containerd[1538]: time="2025-07-14T23:07:14.559637062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:14.560019 containerd[1538]: time="2025-07-14T23:07:14.559955031Z" level=info msg="RemovePodSandbox \"d43c1a91dd2276562727eefbcc01517e504d878328879e85a13408edec409254\" returns successfully" Jul 14 23:07:14.560868 containerd[1538]: time="2025-07-14T23:07:14.560704855Z" level=info msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.592 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3b1fbe5-9018-4070-bc43-78a6548c5e8b", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab", Pod:"calico-apiserver-7fb9554685-vnrld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e858eefa40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.592 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.592 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" iface="eth0" netns="" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.592 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.592 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.621 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.621 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.621 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.628 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.628 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.629 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.636843 containerd[1538]: 2025-07-14 23:07:14.631 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.636843 containerd[1538]: time="2025-07-14T23:07:14.636749721Z" level=info msg="TearDown network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" successfully" Jul 14 23:07:14.636843 containerd[1538]: time="2025-07-14T23:07:14.636767362Z" level=info msg="StopPodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" returns successfully" Jul 14 23:07:14.637998 containerd[1538]: time="2025-07-14T23:07:14.637051308Z" level=info msg="RemovePodSandbox for \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" Jul 14 23:07:14.637998 containerd[1538]: time="2025-07-14T23:07:14.637065469Z" level=info msg="Forcibly stopping sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\"" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.662 [WARNING][5501] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0", GenerateName:"calico-apiserver-7fb9554685-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3b1fbe5-9018-4070-bc43-78a6548c5e8b", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fb9554685", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8231eb6b89596f8bb82467154a9641ecdf78f3080ea626b95504303389ad45ab", Pod:"calico-apiserver-7fb9554685-vnrld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7e858eefa40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.662 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.662 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" iface="eth0" netns="" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.662 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.662 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.677 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.677 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.677 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.681 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.681 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" HandleID="k8s-pod-network.d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Workload="localhost-k8s-calico--apiserver--7fb9554685--vnrld-eth0" Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.682 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.692964 containerd[1538]: 2025-07-14 23:07:14.686 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b" Jul 14 23:07:14.692964 containerd[1538]: time="2025-07-14T23:07:14.692374610Z" level=info msg="TearDown network for sandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" successfully" Jul 14 23:07:14.696683 containerd[1538]: time="2025-07-14T23:07:14.696668919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:14.696781 containerd[1538]: time="2025-07-14T23:07:14.696769543Z" level=info msg="RemovePodSandbox \"d7ba934ca5fa8f5a942e798510589363b14ba71279585d1870a93dee910bdc2b\" returns successfully" Jul 14 23:07:14.697171 containerd[1538]: time="2025-07-14T23:07:14.697160942Z" level=info msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" Jul 14 23:07:14.719156 systemd[1]: Started cri-containerd-50c51de6cef4881b66c94c2c2e7267f2c56f4927772ce78258400cfc30a7a12c.scope - libcontainer container 50c51de6cef4881b66c94c2c2e7267f2c56f4927772ce78258400cfc30a7a12c. Jul 14 23:07:14.783047 containerd[1538]: time="2025-07-14T23:07:14.783021443Z" level=info msg="StartContainer for \"50c51de6cef4881b66c94c2c2e7267f2c56f4927772ce78258400cfc30a7a12c\" returns successfully" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.741 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0", GenerateName:"calico-kube-controllers-789c44b4f5-", Namespace:"calico-system", SelfLink:"", UID:"f381ec3d-1f99-48f5-a759-d4ef727cd042", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789c44b4f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c", Pod:"calico-kube-controllers-789c44b4f5-hbf9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55fefe79caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.741 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.741 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" iface="eth0" netns="" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.741 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.741 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.777 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.777 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.777 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.782 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.782 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.783 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.786383 containerd[1538]: 2025-07-14 23:07:14.784 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.787987 containerd[1538]: time="2025-07-14T23:07:14.786402432Z" level=info msg="TearDown network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" successfully" Jul 14 23:07:14.787987 containerd[1538]: time="2025-07-14T23:07:14.786413626Z" level=info msg="StopPodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" returns successfully" Jul 14 23:07:14.872761 containerd[1538]: time="2025-07-14T23:07:14.872361019Z" level=info msg="RemovePodSandbox for \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" Jul 14 23:07:14.872761 containerd[1538]: time="2025-07-14T23:07:14.872388342Z" level=info msg="Forcibly stopping sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\"" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.911 [WARNING][5571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0", GenerateName:"calico-kube-controllers-789c44b4f5-", Namespace:"calico-system", SelfLink:"", UID:"f381ec3d-1f99-48f5-a759-d4ef727cd042", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789c44b4f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f749f27f997f7e4dc3ff818327a30e1d248688b1b63bb69ea330f3e29bfb752c", Pod:"calico-kube-controllers-789c44b4f5-hbf9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55fefe79caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.911 [INFO][5571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.911 [INFO][5571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" iface="eth0" netns="" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.911 [INFO][5571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.911 [INFO][5571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.936 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.936 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.937 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.941 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.941 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" HandleID="k8s-pod-network.8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Workload="localhost-k8s-calico--kube--controllers--789c44b4f5--hbf9t-eth0" Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.943 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:14.947465 containerd[1538]: 2025-07-14 23:07:14.946 [INFO][5571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951" Jul 14 23:07:14.949301 containerd[1538]: time="2025-07-14T23:07:14.947923965Z" level=info msg="TearDown network for sandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" successfully" Jul 14 23:07:14.962480 containerd[1538]: time="2025-07-14T23:07:14.962317966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:14.962480 containerd[1538]: time="2025-07-14T23:07:14.962362858Z" level=info msg="RemovePodSandbox \"8ebf0099ef01d13457469ce1ae86fc508ffb7913518970c58c807e2985630951\" returns successfully" Jul 14 23:07:14.964337 containerd[1538]: time="2025-07-14T23:07:14.964324153Z" level=info msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.029 [WARNING][5593] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4jr6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffa01b57-cf5c-4652-8eda-490fdd179a1b", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91", Pod:"csi-node-driver-4jr6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93cabc924ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.030 [INFO][5593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.030 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" iface="eth0" netns="" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.030 [INFO][5593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.030 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.058 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.058 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.059 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.073 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.073 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.074 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.076890 containerd[1538]: 2025-07-14 23:07:15.075 [INFO][5593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.082818 containerd[1538]: time="2025-07-14T23:07:15.078132971Z" level=info msg="TearDown network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" successfully" Jul 14 23:07:15.082818 containerd[1538]: time="2025-07-14T23:07:15.078150204Z" level=info msg="StopPodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" returns successfully" Jul 14 23:07:15.082818 containerd[1538]: time="2025-07-14T23:07:15.078456353Z" level=info msg="RemovePodSandbox for \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" Jul 14 23:07:15.082818 containerd[1538]: time="2025-07-14T23:07:15.078472388Z" level=info msg="Forcibly stopping sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\"" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.104 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4jr6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ffa01b57-cf5c-4652-8eda-490fdd179a1b", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91", Pod:"csi-node-driver-4jr6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93cabc924ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.104 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.104 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" iface="eth0" netns="" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.104 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.104 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.124 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.124 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.125 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.130 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.130 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" HandleID="k8s-pod-network.80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Workload="localhost-k8s-csi--node--driver--4jr6z-eth0" Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.130 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.134993 containerd[1538]: 2025-07-14 23:07:15.133 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d" Jul 14 23:07:15.141793 containerd[1538]: time="2025-07-14T23:07:15.135132246Z" level=info msg="TearDown network for sandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" successfully" Jul 14 23:07:15.144725 containerd[1538]: time="2025-07-14T23:07:15.144677772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:15.145024 containerd[1538]: time="2025-07-14T23:07:15.144828002Z" level=info msg="RemovePodSandbox \"80facd741d5e4ffb378eb31c003e2b8b943dc8d829374aa5eb7a612dc68ca69d\" returns successfully" Jul 14 23:07:15.145338 containerd[1538]: time="2025-07-14T23:07:15.145225728Z" level=info msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.170 [WARNING][5638] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eac42073-634d-4a92-8c8b-4e4d39002987", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f", Pod:"coredns-7c65d6cfc9-hc96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54d9e493df0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.171 [INFO][5638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.171 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" iface="eth0" netns="" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.171 [INFO][5638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.171 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.192 [INFO][5645] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.192 [INFO][5645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.192 [INFO][5645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.201 [WARNING][5645] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.201 [INFO][5645] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.202 [INFO][5645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.214062 containerd[1538]: 2025-07-14 23:07:15.211 [INFO][5638] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.217318 containerd[1538]: time="2025-07-14T23:07:15.214256552Z" level=info msg="TearDown network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" successfully" Jul 14 23:07:15.217318 containerd[1538]: time="2025-07-14T23:07:15.214271093Z" level=info msg="StopPodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" returns successfully" Jul 14 23:07:15.225601 containerd[1538]: time="2025-07-14T23:07:15.225441401Z" level=info msg="RemovePodSandbox for \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" Jul 14 23:07:15.225601 containerd[1538]: time="2025-07-14T23:07:15.225460760Z" level=info msg="Forcibly stopping sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\"" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.284 [WARNING][5659] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eac42073-634d-4a92-8c8b-4e4d39002987", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59e41c1efb25b4ce6b2ad587c763b9477923da04545c866aa20d8bc50c9c6c4f", Pod:"coredns-7c65d6cfc9-hc96w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54d9e493df0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.285 [INFO][5659] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.285 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" iface="eth0" netns="" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.285 [INFO][5659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.285 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.322 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.322 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.322 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.332 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.332 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" HandleID="k8s-pod-network.60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Workload="localhost-k8s-coredns--7c65d6cfc9--hc96w-eth0" Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.334 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.338165 containerd[1538]: 2025-07-14 23:07:15.335 [INFO][5659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357" Jul 14 23:07:15.341223 containerd[1538]: time="2025-07-14T23:07:15.338256936Z" level=info msg="TearDown network for sandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" successfully" Jul 14 23:07:15.517128 containerd[1538]: time="2025-07-14T23:07:15.517099804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:15.518232 containerd[1538]: time="2025-07-14T23:07:15.517260995Z" level=info msg="RemovePodSandbox \"60b63cd1d2bc8137d3b2c460cacbcefb6a777adc4ebc12fbbcc857d1a5535357\" returns successfully" Jul 14 23:07:15.535748 containerd[1538]: time="2025-07-14T23:07:15.535695134Z" level=info msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.578 [WARNING][5684] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"47889b81-1613-42d1-9473-e89886fa669f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac", Pod:"coredns-7c65d6cfc9-86f6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif28411e7715", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.578 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.578 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" iface="eth0" netns="" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.578 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.578 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.608 [INFO][5693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.608 [INFO][5693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.608 [INFO][5693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.624 [WARNING][5693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.624 [INFO][5693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.626 [INFO][5693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.631632 containerd[1538]: 2025-07-14 23:07:15.628 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.631632 containerd[1538]: time="2025-07-14T23:07:15.631290827Z" level=info msg="TearDown network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" successfully" Jul 14 23:07:15.651851 containerd[1538]: time="2025-07-14T23:07:15.631304865Z" level=info msg="StopPodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" returns successfully" Jul 14 23:07:15.652617 containerd[1538]: time="2025-07-14T23:07:15.652109155Z" level=info msg="RemovePodSandbox for \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" Jul 14 23:07:15.652617 containerd[1538]: time="2025-07-14T23:07:15.652129894Z" level=info msg="Forcibly stopping sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\"" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.696 [WARNING][5712] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"47889b81-1613-42d1-9473-e89886fa669f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d77dc65ffc35b92ec2952aeb03ab362af3887c9017312b0ff818032f4be181ac", Pod:"coredns-7c65d6cfc9-86f6f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif28411e7715", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.698 [INFO][5712] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.698 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" iface="eth0" netns="" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.698 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.698 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.722 [INFO][5719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.722 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.722 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.727 [WARNING][5719] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.727 [INFO][5719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" HandleID="k8s-pod-network.d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Workload="localhost-k8s-coredns--7c65d6cfc9--86f6f-eth0" Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.728 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.732808 containerd[1538]: 2025-07-14 23:07:15.731 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10" Jul 14 23:07:15.735346 containerd[1538]: time="2025-07-14T23:07:15.732961447Z" level=info msg="TearDown network for sandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" successfully" Jul 14 23:07:15.738264 containerd[1538]: time="2025-07-14T23:07:15.738110966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:15.738264 containerd[1538]: time="2025-07-14T23:07:15.738153546Z" level=info msg="RemovePodSandbox \"d554541d3db0c3b607ce2e6782d0ca7031b7705c981827114079a1a167c3af10\" returns successfully" Jul 14 23:07:15.739180 containerd[1538]: time="2025-07-14T23:07:15.739163465Z" level=info msg="StopPodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" Jul 14 23:07:15.740014 kubelet[2725]: I0714 23:07:15.734346 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-5d4g6" podStartSLOduration=30.115320271 podStartE2EDuration="48.68236296s" podCreationTimestamp="2025-07-14 23:06:27 +0000 UTC" firstStartedPulling="2025-07-14 23:06:55.450686882 +0000 UTC m=+44.371393172" lastFinishedPulling="2025-07-14 23:07:14.017729569 +0000 UTC m=+62.938435861" observedRunningTime="2025-07-14 23:07:15.588239432 +0000 UTC m=+64.508945731" watchObservedRunningTime="2025-07-14 23:07:15.68236296 +0000 UTC m=+64.603069254" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.777 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43e0fd53-8ec8-4b02-86e2-a124557aa367", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454", Pod:"goldmane-58fd7646b9-5d4g6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f7462abe57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.777 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.777 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" iface="eth0" netns="" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.777 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.777 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.797 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.797 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.797 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.806 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.806 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.807 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.810275 containerd[1538]: 2025-07-14 23:07:15.808 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.816275 containerd[1538]: time="2025-07-14T23:07:15.810278974Z" level=info msg="TearDown network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" successfully" Jul 14 23:07:15.816275 containerd[1538]: time="2025-07-14T23:07:15.810295644Z" level=info msg="StopPodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" returns successfully" Jul 14 23:07:15.816275 containerd[1538]: time="2025-07-14T23:07:15.810725943Z" level=info msg="RemovePodSandbox for \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" Jul 14 23:07:15.816275 containerd[1538]: time="2025-07-14T23:07:15.810749848Z" level=info msg="Forcibly stopping sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\"" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.842 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"43e0fd53-8ec8-4b02-86e2-a124557aa367", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 23, 6, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26130304085c8326dfc85985702e52e66090d61123d8558b02387790c0ab5454", Pod:"goldmane-58fd7646b9-5d4g6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6f7462abe57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.843 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.843 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" iface="eth0" netns="" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.843 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.843 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.861 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.862 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.862 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.867 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.867 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" HandleID="k8s-pod-network.8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Workload="localhost-k8s-goldmane--58fd7646b9--5d4g6-eth0" Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.867 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 23:07:15.872852 containerd[1538]: 2025-07-14 23:07:15.869 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e" Jul 14 23:07:15.873343 containerd[1538]: time="2025-07-14T23:07:15.872875955Z" level=info msg="TearDown network for sandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" successfully" Jul 14 23:07:15.875822 containerd[1538]: time="2025-07-14T23:07:15.875802093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 23:07:15.875864 containerd[1538]: time="2025-07-14T23:07:15.875837572Z" level=info msg="RemovePodSandbox \"8fae9a5219270f031d65f7835bdf8284ac7002124c33d69de3f815a76bdd3e1e\" returns successfully" Jul 14 23:07:17.148830 containerd[1538]: time="2025-07-14T23:07:17.148723256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:17.205621 containerd[1538]: time="2025-07-14T23:07:17.194698540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 14 23:07:17.214807 containerd[1538]: time="2025-07-14T23:07:17.211854548Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:17.240662 containerd[1538]: time="2025-07-14T23:07:17.240622642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 23:07:17.245714 containerd[1538]: time="2025-07-14T23:07:17.245465065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.176739923s" Jul 14 23:07:17.252469 containerd[1538]: time="2025-07-14T23:07:17.249728300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 23:07:18.764582 containerd[1538]: 
time="2025-07-14T23:07:18.764544294Z" level=info msg="CreateContainer within sandbox \"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 23:07:19.077589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032854829.mount: Deactivated successfully. Jul 14 23:07:19.127818 containerd[1538]: time="2025-07-14T23:07:19.094578850Z" level=info msg="CreateContainer within sandbox \"a13ce779967ebe010f7e625b9a81e00a36d8fefd375ff48732cc04c50f17cb91\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7\"" Jul 14 23:07:19.161653 containerd[1538]: time="2025-07-14T23:07:19.161560067Z" level=info msg="StartContainer for \"a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7\"" Jul 14 23:07:19.513191 systemd[1]: Started cri-containerd-a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7.scope - libcontainer container a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7. Jul 14 23:07:19.525480 systemd[1]: Started sshd@7-139.178.70.101:22-139.178.68.195:36782.service - OpenSSH per-connection server daemon (139.178.68.195:36782). Jul 14 23:07:19.723033 containerd[1538]: time="2025-07-14T23:07:19.723005160Z" level=info msg="StartContainer for \"a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7\" returns successfully" Jul 14 23:07:19.802052 sshd[5841]: Accepted publickey for core from 139.178.68.195 port 36782 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:19.808896 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:19.825518 systemd-logind[1517]: New session 10 of user core. Jul 14 23:07:19.831311 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 14 23:07:19.862749 systemd[1]: run-containerd-runc-k8s.io-a29ee3b8d53b40f973aebce2b0844f4e7b4ab8b953ab79cec3e410070a8290d7-runc.uOHMaC.mount: Deactivated successfully. Jul 14 23:07:20.070290 systemd[1]: Started sshd@8-139.178.70.101:22-195.178.110.224:36310.service - OpenSSH per-connection server daemon (195.178.110.224:36310). Jul 14 23:07:20.406898 kubelet[2725]: I0714 23:07:20.369278 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4jr6z" podStartSLOduration=27.311957306 podStartE2EDuration="52.32455505s" podCreationTimestamp="2025-07-14 23:06:28 +0000 UTC" firstStartedPulling="2025-07-14 23:06:52.513629532 +0000 UTC m=+41.434335823" lastFinishedPulling="2025-07-14 23:07:17.526227277 +0000 UTC m=+66.446933567" observedRunningTime="2025-07-14 23:07:20.268984598 +0000 UTC m=+69.189690896" watchObservedRunningTime="2025-07-14 23:07:20.32455505 +0000 UTC m=+69.245261350" Jul 14 23:07:20.697738 sshd[5871]: Invalid user ubuntu from 195.178.110.224 port 36310 Jul 14 23:07:20.827985 kubelet[2725]: I0714 23:07:20.813235 2725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 23:07:20.829347 kubelet[2725]: I0714 23:07:20.829331 2725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 23:07:20.838094 sshd[5871]: Connection closed by invalid user ubuntu 195.178.110.224 port 36310 [preauth] Jul 14 23:07:20.839307 systemd[1]: sshd@8-139.178.70.101:22-195.178.110.224:36310.service: Deactivated successfully. Jul 14 23:07:21.199654 sshd[5841]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:21.217379 systemd[1]: sshd@7-139.178.70.101:22-139.178.68.195:36782.service: Deactivated successfully. Jul 14 23:07:21.219987 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 14 23:07:21.220831 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Jul 14 23:07:21.221502 systemd-logind[1517]: Removed session 10. Jul 14 23:07:26.226431 systemd[1]: Started sshd@9-139.178.70.101:22-139.178.68.195:46942.service - OpenSSH per-connection server daemon (139.178.68.195:46942). Jul 14 23:07:26.689203 sshd[5886]: Accepted publickey for core from 139.178.68.195 port 46942 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:26.691427 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:26.706687 systemd-logind[1517]: New session 11 of user core. Jul 14 23:07:26.713366 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 23:07:29.167590 sshd[5886]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:29.210010 systemd[1]: sshd@9-139.178.70.101:22-139.178.68.195:46942.service: Deactivated successfully. Jul 14 23:07:29.211738 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 23:07:29.212501 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Jul 14 23:07:29.217248 systemd[1]: Started sshd@10-139.178.70.101:22-139.178.68.195:46950.service - OpenSSH per-connection server daemon (139.178.68.195:46950). Jul 14 23:07:29.219757 systemd-logind[1517]: Removed session 11. Jul 14 23:07:29.275900 sshd[5949]: Accepted publickey for core from 139.178.68.195 port 46950 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:29.277832 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:29.284064 systemd-logind[1517]: New session 12 of user core. Jul 14 23:07:29.289196 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 23:07:29.561718 sshd[5949]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:29.567441 systemd[1]: sshd@10-139.178.70.101:22-139.178.68.195:46950.service: Deactivated successfully. 
Jul 14 23:07:29.569787 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 23:07:29.570975 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Jul 14 23:07:29.574314 systemd[1]: Started sshd@11-139.178.70.101:22-139.178.68.195:46956.service - OpenSSH per-connection server daemon (139.178.68.195:46956). Jul 14 23:07:29.575352 systemd-logind[1517]: Removed session 12. Jul 14 23:07:29.643164 sshd[5960]: Accepted publickey for core from 139.178.68.195 port 46956 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:29.644231 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:29.647622 systemd-logind[1517]: New session 13 of user core. Jul 14 23:07:29.651171 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 23:07:30.060374 sshd[5960]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:30.069131 systemd[1]: sshd@11-139.178.70.101:22-139.178.68.195:46956.service: Deactivated successfully. Jul 14 23:07:30.070336 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 23:07:30.071675 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Jul 14 23:07:30.072372 systemd-logind[1517]: Removed session 13. Jul 14 23:07:35.104583 systemd[1]: Started sshd@12-139.178.70.101:22-139.178.68.195:40346.service - OpenSSH per-connection server daemon (139.178.68.195:40346). Jul 14 23:07:35.208577 sshd[6019]: Accepted publickey for core from 139.178.68.195 port 40346 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:35.211558 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:35.216186 systemd-logind[1517]: New session 14 of user core. Jul 14 23:07:35.221192 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 14 23:07:35.830475 sshd[6019]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:35.840352 systemd[1]: sshd@12-139.178.70.101:22-139.178.68.195:40346.service: Deactivated successfully. Jul 14 23:07:35.843595 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 23:07:35.846435 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. Jul 14 23:07:35.855865 systemd-logind[1517]: Removed session 14. Jul 14 23:07:40.846268 systemd[1]: Started sshd@13-139.178.70.101:22-139.178.68.195:43486.service - OpenSSH per-connection server daemon (139.178.68.195:43486). Jul 14 23:07:40.972522 sshd[6036]: Accepted publickey for core from 139.178.68.195 port 43486 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:40.975130 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:40.978452 systemd-logind[1517]: New session 15 of user core. Jul 14 23:07:40.984165 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 23:07:41.605947 sshd[6036]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:41.613334 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. Jul 14 23:07:41.613811 systemd[1]: sshd@13-139.178.70.101:22-139.178.68.195:43486.service: Deactivated successfully. Jul 14 23:07:41.614867 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 23:07:41.615467 systemd-logind[1517]: Removed session 15. Jul 14 23:07:46.630417 systemd[1]: Started sshd@14-139.178.70.101:22-139.178.68.195:43500.service - OpenSSH per-connection server daemon (139.178.68.195:43500). Jul 14 23:07:46.753192 sshd[6049]: Accepted publickey for core from 139.178.68.195 port 43500 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:46.756018 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:46.758590 systemd-logind[1517]: New session 16 of user core. 
Jul 14 23:07:46.768205 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 23:07:47.236225 sshd[6049]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:47.238402 systemd[1]: sshd@14-139.178.70.101:22-139.178.68.195:43500.service: Deactivated successfully. Jul 14 23:07:47.239511 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 23:07:47.239983 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. Jul 14 23:07:47.240520 systemd-logind[1517]: Removed session 16. Jul 14 23:07:52.247222 systemd[1]: Started sshd@15-139.178.70.101:22-139.178.68.195:56574.service - OpenSSH per-connection server daemon (139.178.68.195:56574). Jul 14 23:07:52.319912 sshd[6064]: Accepted publickey for core from 139.178.68.195 port 56574 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:52.321139 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:52.324432 systemd-logind[1517]: New session 17 of user core. Jul 14 23:07:52.328266 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 23:07:52.828302 sshd[6064]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:52.833685 systemd[1]: sshd@15-139.178.70.101:22-139.178.68.195:56574.service: Deactivated successfully. Jul 14 23:07:52.834678 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 23:07:52.835732 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit. Jul 14 23:07:52.840508 systemd[1]: Started sshd@16-139.178.70.101:22-139.178.68.195:56578.service - OpenSSH per-connection server daemon (139.178.68.195:56578). Jul 14 23:07:52.841375 systemd-logind[1517]: Removed session 17. 
Jul 14 23:07:52.899132 sshd[6077]: Accepted publickey for core from 139.178.68.195 port 56578 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:52.899962 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:52.903149 systemd-logind[1517]: New session 18 of user core. Jul 14 23:07:52.909245 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 23:07:53.276195 sshd[6077]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:53.283228 systemd[1]: Started sshd@17-139.178.70.101:22-139.178.68.195:56588.service - OpenSSH per-connection server daemon (139.178.68.195:56588). Jul 14 23:07:53.283670 systemd[1]: sshd@16-139.178.70.101:22-139.178.68.195:56578.service: Deactivated successfully. Jul 14 23:07:53.284823 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 23:07:53.285835 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit. Jul 14 23:07:53.286605 systemd-logind[1517]: Removed session 18. Jul 14 23:07:53.374518 sshd[6085]: Accepted publickey for core from 139.178.68.195 port 56588 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:53.375488 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:53.378188 systemd-logind[1517]: New session 19 of user core. Jul 14 23:07:53.388156 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 23:07:56.490155 sshd[6085]: pam_unix(sshd:session): session closed for user core Jul 14 23:07:56.538004 systemd[1]: sshd@17-139.178.70.101:22-139.178.68.195:56588.service: Deactivated successfully. Jul 14 23:07:56.539459 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 23:07:56.540571 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit. 
Jul 14 23:07:56.554267 systemd[1]: Started sshd@18-139.178.70.101:22-139.178.68.195:56592.service - OpenSSH per-connection server daemon (139.178.68.195:56592). Jul 14 23:07:56.554922 systemd-logind[1517]: Removed session 19. Jul 14 23:07:56.732674 sshd[6115]: Accepted publickey for core from 139.178.68.195 port 56592 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:07:56.735903 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:07:56.747853 systemd-logind[1517]: New session 20 of user core. Jul 14 23:07:56.754296 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 23:08:14.345072 kubelet[2725]: E0714 23:08:14.305608 2725 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.403s" Jul 14 23:08:16.197091 kubelet[2725]: E0714 23:08:16.196343 2725 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.57s" Jul 14 23:08:16.687158 sshd[6115]: pam_unix(sshd:session): session closed for user core Jul 14 23:08:16.737326 systemd[1]: sshd@18-139.178.70.101:22-139.178.68.195:56592.service: Deactivated successfully. Jul 14 23:08:16.738510 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 23:08:16.738620 systemd[1]: session-20.scope: Consumed 4.787s CPU time. Jul 14 23:08:16.739394 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit. Jul 14 23:08:16.750267 systemd[1]: Started sshd@19-139.178.70.101:22-139.178.68.195:58510.service - OpenSSH per-connection server daemon (139.178.68.195:58510). Jul 14 23:08:16.751062 systemd-logind[1517]: Removed session 20. 
Jul 14 23:08:16.876328 sshd[6233]: Accepted publickey for core from 139.178.68.195 port 58510 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:08:16.877797 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:08:16.883196 systemd-logind[1517]: New session 21 of user core. Jul 14 23:08:16.889167 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 23:08:20.236522 sshd[6233]: pam_unix(sshd:session): session closed for user core Jul 14 23:08:20.284605 systemd[1]: sshd@19-139.178.70.101:22-139.178.68.195:58510.service: Deactivated successfully. Jul 14 23:08:20.285890 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 23:08:20.285983 systemd[1]: session-21.scope: Consumed 1.102s CPU time. Jul 14 23:08:20.287140 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit. Jul 14 23:08:20.293390 systemd-logind[1517]: Removed session 21. Jul 14 23:08:25.303275 systemd[1]: Started sshd@20-139.178.70.101:22-139.178.68.195:59942.service - OpenSSH per-connection server daemon (139.178.68.195:59942). Jul 14 23:08:25.495936 sshd[6268]: Accepted publickey for core from 139.178.68.195 port 59942 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:08:25.500548 sshd[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:08:25.511130 systemd-logind[1517]: New session 22 of user core. Jul 14 23:08:25.518640 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 23:08:26.118729 sshd[6268]: pam_unix(sshd:session): session closed for user core Jul 14 23:08:26.124297 systemd[1]: sshd@20-139.178.70.101:22-139.178.68.195:59942.service: Deactivated successfully. Jul 14 23:08:26.131354 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 23:08:26.133110 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit. Jul 14 23:08:26.134820 systemd-logind[1517]: Removed session 22. 
Jul 14 23:08:31.201245 systemd[1]: Started sshd@21-139.178.70.101:22-139.178.68.195:34720.service - OpenSSH per-connection server daemon (139.178.68.195:34720). Jul 14 23:08:31.437706 sshd[6323]: Accepted publickey for core from 139.178.68.195 port 34720 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:08:31.448248 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:08:31.468099 systemd-logind[1517]: New session 23 of user core. Jul 14 23:08:31.475229 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 14 23:08:34.323228 sshd[6323]: pam_unix(sshd:session): session closed for user core Jul 14 23:08:34.359722 systemd[1]: sshd@21-139.178.70.101:22-139.178.68.195:34720.service: Deactivated successfully. Jul 14 23:08:34.364969 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 23:08:34.365940 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit. Jul 14 23:08:34.369759 systemd-logind[1517]: Removed session 23. Jul 14 23:08:39.394625 systemd[1]: Started sshd@22-139.178.70.101:22-139.178.68.195:34730.service - OpenSSH per-connection server daemon (139.178.68.195:34730). Jul 14 23:08:39.545235 sshd[6387]: Accepted publickey for core from 139.178.68.195 port 34730 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 23:08:39.549245 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 23:08:39.558122 systemd-logind[1517]: New session 24 of user core. Jul 14 23:08:39.565157 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 14 23:08:40.092313 sshd[6387]: pam_unix(sshd:session): session closed for user core Jul 14 23:08:40.099528 systemd[1]: sshd@22-139.178.70.101:22-139.178.68.195:34730.service: Deactivated successfully. Jul 14 23:08:40.100718 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 23:08:40.101193 systemd-logind[1517]: Session 24 logged out. 
Waiting for processes to exit. Jul 14 23:08:40.101693 systemd-logind[1517]: Removed session 24.