Jul 14 22:45:16.741931 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:45:16.741948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:45:16.741954 kernel: Disabled fast string operations
Jul 14 22:45:16.741958 kernel: BIOS-provided physical RAM map:
Jul 14 22:45:16.741962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 14 22:45:16.741966 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 14 22:45:16.741972 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 14 22:45:16.741977 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 14 22:45:16.741981 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 14 22:45:16.741985 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 14 22:45:16.741989 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 14 22:45:16.741993 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 14 22:45:16.741997 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 14 22:45:16.742001 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 14 22:45:16.742008 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 14 22:45:16.742012 kernel: NX (Execute Disable) protection: active
Jul 14 22:45:16.742017 kernel: APIC: Static calls initialized
Jul 14 22:45:16.742022 kernel: SMBIOS 2.7 present.
Jul 14 22:45:16.742027 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 14 22:45:16.742031 kernel: vmware: hypercall mode: 0x00
Jul 14 22:45:16.742036 kernel: Hypervisor detected: VMware
Jul 14 22:45:16.742041 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 14 22:45:16.742047 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 14 22:45:16.742051 kernel: vmware: using clock offset of 4240981186 ns
Jul 14 22:45:16.742056 kernel: tsc: Detected 3408.000 MHz processor
Jul 14 22:45:16.742061 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:45:16.742066 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:45:16.742071 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 14 22:45:16.742076 kernel: total RAM covered: 3072M
Jul 14 22:45:16.742080 kernel: Found optimal setting for mtrr clean up
Jul 14 22:45:16.742086 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 14 22:45:16.742092 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jul 14 22:45:16.742097 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:45:16.742101 kernel: Using GB pages for direct mapping
Jul 14 22:45:16.742106 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:45:16.742111 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 14 22:45:16.742116 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 14 22:45:16.742120 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 14 22:45:16.742125 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 14 22:45:16.742130 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 22:45:16.742138 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 22:45:16.742143 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 14 22:45:16.742148 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 14 22:45:16.742153 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 14 22:45:16.742158 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 14 22:45:16.742164 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 14 22:45:16.742169 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 14 22:45:16.742175 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 14 22:45:16.742180 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 14 22:45:16.742185 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 22:45:16.742190 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 22:45:16.742195 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 14 22:45:16.742200 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 14 22:45:16.742205 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 14 22:45:16.742210 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 14 22:45:16.742216 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 14 22:45:16.742221 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 14 22:45:16.742226 kernel: system APIC only can use physical flat
Jul 14 22:45:16.742231 kernel: APIC: Switched APIC routing to: physical flat
Jul 14 22:45:16.742236 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 14 22:45:16.742241 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 14 22:45:16.742247 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 14 22:45:16.742252 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 14 22:45:16.742256 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 14 22:45:16.742263 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 14 22:45:16.742267 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 14 22:45:16.742273 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 14 22:45:16.742278 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jul 14 22:45:16.742282 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jul 14 22:45:16.742287 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jul 14 22:45:16.742292 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jul 14 22:45:16.742297 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jul 14 22:45:16.742302 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jul 14 22:45:16.742307 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jul 14 22:45:16.742313 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jul 14 22:45:16.743339 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jul 14 22:45:16.743349 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jul 14 22:45:16.743355 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jul 14 22:45:16.743360 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jul 14 22:45:16.743365 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jul 14 22:45:16.743370 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jul 14 22:45:16.743375 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jul 14 22:45:16.743381 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jul 14 22:45:16.743386 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jul 14 22:45:16.743390 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jul 14 22:45:16.743398 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jul 14 22:45:16.743403 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jul 14 22:45:16.743408 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jul 14 22:45:16.743413 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jul 14 22:45:16.743418 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jul 14 22:45:16.743423 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jul 14 22:45:16.743428 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jul 14 22:45:16.743433 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 14 22:45:16.743438 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 14 22:45:16.743443 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 14 22:45:16.743454 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 14 22:45:16.743463 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 14 22:45:16.743470 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 14 22:45:16.743479 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 14 22:45:16.743485 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 14 22:45:16.743490 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 14 22:45:16.743495 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 14 22:45:16.743500 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 14 22:45:16.743505 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 14 22:45:16.743510 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 14 22:45:16.743517 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 14 22:45:16.743522 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 14 22:45:16.743527 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 14 22:45:16.743532 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 14 22:45:16.743537 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 14 22:45:16.743542 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 14 22:45:16.743547 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 14 22:45:16.743552 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 14 22:45:16.743556 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 14 22:45:16.743561 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 14 22:45:16.743567 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 14 22:45:16.743572 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 14 22:45:16.743578 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 14 22:45:16.743587 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 14 22:45:16.743592 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 14 22:45:16.743598 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 14 22:45:16.743603 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 14 22:45:16.743608 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 14 22:45:16.743613 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 14 22:45:16.743620 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 14 22:45:16.743625 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 14 22:45:16.743631 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 14 22:45:16.743636 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 14 22:45:16.743641 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 14 22:45:16.743647 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 14 22:45:16.743652 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 14 22:45:16.743657 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 14 22:45:16.743662 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 14 22:45:16.743668 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 14 22:45:16.743674 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 14 22:45:16.743680 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 14 22:45:16.743686 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 14 22:45:16.743691 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 14 22:45:16.743696 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 14 22:45:16.743701 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 14 22:45:16.743707 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 14 22:45:16.743712 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 14 22:45:16.743717 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 14 22:45:16.743723 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 14 22:45:16.743729 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 14 22:45:16.743735 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 14 22:45:16.743740 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 14 22:45:16.743745 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 14 22:45:16.743751 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 14 22:45:16.743756 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 14 22:45:16.743761 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 14 22:45:16.743767 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 14 22:45:16.743772 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 14 22:45:16.743777 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 14 22:45:16.743784 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 14 22:45:16.743789 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 14 22:45:16.743794 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 14 22:45:16.743800 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 14 22:45:16.743805 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 14 22:45:16.743810 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 14 22:45:16.743815 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 14 22:45:16.743821 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 14 22:45:16.743826 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 14 22:45:16.743831 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 14 22:45:16.743838 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 14 22:45:16.743843 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 14 22:45:16.743848 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 14 22:45:16.743853 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 14 22:45:16.743859 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 14 22:45:16.743864 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 14 22:45:16.743869 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 14 22:45:16.743875 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 14 22:45:16.743880 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 14 22:45:16.743885 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 14 22:45:16.743891 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 14 22:45:16.743897 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 14 22:45:16.743902 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 14 22:45:16.743908 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 14 22:45:16.743913 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 14 22:45:16.743919 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 14 22:45:16.743924 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 14 22:45:16.743929 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 14 22:45:16.743934 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 14 22:45:16.743940 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 14 22:45:16.743945 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 14 22:45:16.743951 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 14 22:45:16.743957 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 14 22:45:16.743963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 14 22:45:16.743968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 14 22:45:16.743974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 14 22:45:16.743979 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 14 22:45:16.743985 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 14 22:45:16.743991 kernel: Zone ranges:
Jul 14 22:45:16.743996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:45:16.744003 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 14 22:45:16.744008 kernel: Normal empty
Jul 14 22:45:16.744014 kernel: Movable zone start for each node
Jul 14 22:45:16.744019 kernel: Early memory node ranges
Jul 14 22:45:16.744024 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 14 22:45:16.744030 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 14 22:45:16.744035 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 14 22:45:16.744041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 14 22:45:16.744046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:45:16.744052 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 14 22:45:16.744058 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 14 22:45:16.744064 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 14 22:45:16.744069 kernel: system APIC only can use physical flat
Jul 14 22:45:16.744074 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 14 22:45:16.744080 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 14 22:45:16.744085 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 14 22:45:16.744091 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 14 22:45:16.744096 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 14 22:45:16.744102 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 14 22:45:16.744108 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 14 22:45:16.744114 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 14 22:45:16.744119 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 14 22:45:16.744124 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 14 22:45:16.744130 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 14 22:45:16.744135 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 14 22:45:16.744140 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 14 22:45:16.744146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 14 22:45:16.744151 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 14 22:45:16.744157 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 14 22:45:16.744163 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 14 22:45:16.744169 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 14 22:45:16.744174 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 14 22:45:16.744179 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 14 22:45:16.744185 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 14 22:45:16.744190 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 14 22:45:16.744195 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 14 22:45:16.744201 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 14 22:45:16.744206 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 14 22:45:16.744212 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 14 22:45:16.744218 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 14 22:45:16.744224 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 14 22:45:16.744229 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 14 22:45:16.744235 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 14 22:45:16.744240 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 14 22:45:16.744245 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 14 22:45:16.744251 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 14 22:45:16.744256 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 14 22:45:16.744262 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 14 22:45:16.744268 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 14 22:45:16.744273 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 14 22:45:16.744279 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 14 22:45:16.744284 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 14 22:45:16.744289 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 14 22:45:16.744295 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 14 22:45:16.744300 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 14 22:45:16.744305 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 14 22:45:16.744311 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 14 22:45:16.744316 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 14 22:45:16.744329 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 14 22:45:16.744335 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 14 22:45:16.744340 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 14 22:45:16.744345 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 14 22:45:16.744351 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 14 22:45:16.744356 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 14 22:45:16.744362 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 14 22:45:16.744367 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 14 22:45:16.744373 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 14 22:45:16.744378 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 14 22:45:16.744384 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 14 22:45:16.744390 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 14 22:45:16.744395 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 14 22:45:16.744401 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 14 22:45:16.744406 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 14 22:45:16.744411 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 14 22:45:16.744417 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 14 22:45:16.744422 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 14 22:45:16.744427 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 14 22:45:16.744434 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 14 22:45:16.744439 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 14 22:45:16.744445 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 14 22:45:16.744454 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 14 22:45:16.744462 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 14 22:45:16.744467 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 14 22:45:16.744473 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 14 22:45:16.744479 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 14 22:45:16.744484 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 14 22:45:16.744489 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 14 22:45:16.744497 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 14 22:45:16.744502 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 14 22:45:16.744508 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 14 22:45:16.744513 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 14 22:45:16.744518 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 14 22:45:16.744524 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 14 22:45:16.744529 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 14 22:45:16.744535 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 14 22:45:16.744540 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 14 22:45:16.744545 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 14 22:45:16.744552 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 14 22:45:16.744557 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 14 22:45:16.744562 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 14 22:45:16.744568 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 14 22:45:16.744573 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 14 22:45:16.744579 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 14 22:45:16.744584 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 14 22:45:16.744590 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 14 22:45:16.744595 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 14 22:45:16.744602 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 14 22:45:16.744607 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 14 22:45:16.744612 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 14 22:45:16.744618 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 14 22:45:16.744623 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 14 22:45:16.744629 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 14 22:45:16.744634 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 14 22:45:16.744639 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 14 22:45:16.744645 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 14 22:45:16.744655 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 14 22:45:16.744664 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 14 22:45:16.744670 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 14 22:45:16.744675 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 14 22:45:16.744680 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 14 22:45:16.744686 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 14 22:45:16.744691 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 14 22:45:16.744696 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 14 22:45:16.744702 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 14 22:45:16.744707 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 14 22:45:16.744714 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 14 22:45:16.744719 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 14 22:45:16.744724 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 14 22:45:16.744730 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 14 22:45:16.744739 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 14 22:45:16.744744 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 14 22:45:16.744749 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 14 22:45:16.744755 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 14 22:45:16.744761 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 14 22:45:16.744766 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 14 22:45:16.744772 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 14 22:45:16.744778 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 14 22:45:16.744783 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 14 22:45:16.744789 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 14 22:45:16.744794 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 14 22:45:16.744800 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 14 22:45:16.744805 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:45:16.744811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 14 22:45:16.744816 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:45:16.744822 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 14 22:45:16.744828 kernel: TSC deadline timer available
Jul 14 22:45:16.744837 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 14 22:45:16.744842 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 14 22:45:16.744848 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 14 22:45:16.744854 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:45:16.744859 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 14 22:45:16.744865 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 14 22:45:16.744871 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 14 22:45:16.744876 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 14 22:45:16.744882 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 14 22:45:16.744888 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 14 22:45:16.744893 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 14 22:45:16.744899 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 14 22:45:16.744912 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 14 22:45:16.744918 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 14 22:45:16.744924 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 14 22:45:16.744930 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 14 22:45:16.744936 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 14 22:45:16.744942 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 14 22:45:16.747356 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 14 22:45:16.747366 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 14 22:45:16.747372 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 14 22:45:16.747378 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 14 22:45:16.747384 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 14 22:45:16.747390 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:45:16.747399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:45:16.747405 kernel: random: crng init done
Jul 14 22:45:16.747411 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 14 22:45:16.747417 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 14 22:45:16.747423 kernel: printk: log_buf_len min size: 262144 bytes
Jul 14 22:45:16.747429 kernel: printk: log_buf_len: 1048576 bytes
Jul 14 22:45:16.747435 kernel: printk: early log buf free: 239648(91%)
Jul 14 22:45:16.747441 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:45:16.747447 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 14 22:45:16.747457 kernel: Fallback order for Node 0: 0
Jul 14 22:45:16.747463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 14 22:45:16.747469 kernel: Policy zone: DMA32
Jul 14 22:45:16.747475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:45:16.747481 kernel: Memory: 1936340K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160028K reserved, 0K cma-reserved)
Jul 14 22:45:16.747488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 14 22:45:16.747496 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:45:16.747502 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:45:16.747508 kernel: Dynamic Preempt: voluntary
Jul 14 22:45:16.747514 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:45:16.747520 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:45:16.747526 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 14 22:45:16.747532 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:45:16.747537 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:45:16.747543 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:45:16.747550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:45:16.747557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 14 22:45:16.747563 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 14 22:45:16.747569 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 14 22:45:16.747575 kernel: Console: colour VGA+ 80x25
Jul 14 22:45:16.747581 kernel: printk: console [tty0] enabled
Jul 14 22:45:16.747587 kernel: printk: console [ttyS0] enabled
Jul 14 22:45:16.747593 kernel: ACPI: Core revision 20230628
Jul 14 22:45:16.747599 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 14 22:45:16.747605 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:45:16.747612 kernel: x2apic enabled
Jul 14 22:45:16.747618 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:45:16.747624 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:45:16.747630 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 14 22:45:16.747636 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 14 22:45:16.747642 kernel: Disabled fast string operations
Jul 14 22:45:16.747648 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 14 22:45:16.747653 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 14 22:45:16.747659 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:45:16.747667 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 14 22:45:16.747672 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 14 22:45:16.747678 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 14 22:45:16.747684 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 14 22:45:16.747690 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 14 22:45:16.747697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:45:16.747702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:45:16.747708 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 14 22:45:16.747715 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 14 22:45:16.747721 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 14 22:45:16.747727 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 14 22:45:16.747733 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:45:16.747739 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:45:16.747745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:45:16.747751 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:45:16.747757 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:45:16.747763 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:45:16.747770 kernel: pid_max: default: 131072 minimum: 1024
Jul 14 22:45:16.747776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:45:16.747782 kernel: landlock: Up and running.
Jul 14 22:45:16.747788 kernel: SELinux: Initializing.
Jul 14 22:45:16.747794 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 22:45:16.747800 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 22:45:16.747806 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 14 22:45:16.747811 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747817 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747824 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747830 kernel: Performance Events: Skylake events, core PMU driver.
Jul 14 22:45:16.747836 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 14 22:45:16.747842 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 14 22:45:16.747848 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 14 22:45:16.747853 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 14 22:45:16.747859 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 14 22:45:16.747865 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 14 22:45:16.747871 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 14 22:45:16.747877 kernel: ... version: 1
Jul 14 22:45:16.747883 kernel: ... bit width: 48
Jul 14 22:45:16.747889 kernel: ... generic registers: 4
Jul 14 22:45:16.747895 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:45:16.747901 kernel: ...
max period: 000000007fffffff Jul 14 22:45:16.747907 kernel: ... fixed-purpose events: 0 Jul 14 22:45:16.747914 kernel: ... event mask: 000000000000000f Jul 14 22:45:16.747919 kernel: signal: max sigframe size: 1776 Jul 14 22:45:16.747925 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:45:16.747932 kernel: rcu: Max phase no-delay instances is 400. Jul 14 22:45:16.747939 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 14 22:45:16.747944 kernel: smp: Bringing up secondary CPUs ... Jul 14 22:45:16.747950 kernel: smpboot: x86: Booting SMP configuration: Jul 14 22:45:16.747956 kernel: .... node #0, CPUs: #1 Jul 14 22:45:16.747962 kernel: Disabled fast string operations Jul 14 22:45:16.747968 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 14 22:45:16.747974 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 14 22:45:16.747980 kernel: smp: Brought up 1 node, 2 CPUs Jul 14 22:45:16.747986 kernel: smpboot: Max logical packages: 128 Jul 14 22:45:16.747993 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 14 22:45:16.747999 kernel: devtmpfs: initialized Jul 14 22:45:16.748005 kernel: x86/mm: Memory block size: 128MB Jul 14 22:45:16.748011 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 14 22:45:16.748017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:45:16.748023 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 14 22:45:16.748029 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:45:16.748035 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:45:16.748041 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:45:16.748048 kernel: audit: type=2000 audit(1752533115.093:1): state=initialized audit_enabled=0 res=1 Jul 14 22:45:16.748053 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:45:16.748059 
kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 22:45:16.748065 kernel: cpuidle: using governor menu Jul 14 22:45:16.748071 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 14 22:45:16.748077 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:45:16.748083 kernel: dca service started, version 1.12.1 Jul 14 22:45:16.748089 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 14 22:45:16.748096 kernel: PCI: Using configuration type 1 for base access Jul 14 22:45:16.748102 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 14 22:45:16.748108 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:45:16.748114 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 14 22:45:16.748120 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:45:16.748126 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 14 22:45:16.748132 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:45:16.748137 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:45:16.748143 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:45:16.748150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:45:16.748156 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 14 22:45:16.748162 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 14 22:45:16.748168 kernel: ACPI: Interpreter enabled Jul 14 22:45:16.748174 kernel: ACPI: PM: (supports S0 S1 S5) Jul 14 22:45:16.748180 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 22:45:16.748186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 22:45:16.748191 kernel: PCI: Using E820 reservations for host bridge windows Jul 14 22:45:16.748197 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 14 22:45:16.748204 kernel: ACPI: PCI Root 
Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 14 22:45:16.748288 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:45:16.749171 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 14 22:45:16.749230 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 14 22:45:16.749239 kernel: PCI host bridge to bus 0000:00 Jul 14 22:45:16.749293 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 22:45:16.749354 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 14 22:45:16.749406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 22:45:16.749453 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 22:45:16.749509 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 14 22:45:16.749556 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 14 22:45:16.749616 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 14 22:45:16.749676 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 14 22:45:16.749735 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 14 22:45:16.749792 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 14 22:45:16.749844 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 14 22:45:16.749896 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 14 22:45:16.749948 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 14 22:45:16.749999 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 14 22:45:16.750053 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 14 22:45:16.750111 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 14 22:45:16.750164 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 14 22:45:16.750215 kernel: pci 0000:00:07.3: quirk: 
[io 0x1040-0x104f] claimed by PIIX4 SMB Jul 14 22:45:16.750270 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 14 22:45:16.754340 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 14 22:45:16.754410 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 14 22:45:16.754480 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 14 22:45:16.754536 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 14 22:45:16.754588 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 14 22:45:16.754640 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 14 22:45:16.754691 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 14 22:45:16.754743 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 22:45:16.754800 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 14 22:45:16.754859 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.754912 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.754971 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755025 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755081 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755134 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755193 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755245 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755301 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755365 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755421 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755484 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755546 kernel: pci 0000:00:15.6: [15ad:07a0] 
type 01 class 0x060400 Jul 14 22:45:16.755599 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755654 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755707 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755763 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755816 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755874 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.755926 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.755984 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.756036 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.756091 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.756157 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.756213 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.756265 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.758631 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.758699 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.758756 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.758809 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.758867 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.758918 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.758972 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759023 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759075 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759126 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759185 kernel: 
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759236 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759288 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759353 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759408 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759459 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759514 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759569 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759623 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759673 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759727 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759777 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759830 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759884 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759973 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760054 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.760110 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760161 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.760214 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760267 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762392 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762464 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762527 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762582 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Jul 14 22:45:16.762673 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762730 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762785 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762838 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762891 kernel: pci_bus 0000:01: extended config space not accessible Jul 14 22:45:16.762945 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:45:16.762997 kernel: pci_bus 0000:02: extended config space not accessible Jul 14 22:45:16.763006 kernel: acpiphp: Slot [32] registered Jul 14 22:45:16.763015 kernel: acpiphp: Slot [33] registered Jul 14 22:45:16.763021 kernel: acpiphp: Slot [34] registered Jul 14 22:45:16.763027 kernel: acpiphp: Slot [35] registered Jul 14 22:45:16.763033 kernel: acpiphp: Slot [36] registered Jul 14 22:45:16.763038 kernel: acpiphp: Slot [37] registered Jul 14 22:45:16.763044 kernel: acpiphp: Slot [38] registered Jul 14 22:45:16.763050 kernel: acpiphp: Slot [39] registered Jul 14 22:45:16.763056 kernel: acpiphp: Slot [40] registered Jul 14 22:45:16.763062 kernel: acpiphp: Slot [41] registered Jul 14 22:45:16.763069 kernel: acpiphp: Slot [42] registered Jul 14 22:45:16.763074 kernel: acpiphp: Slot [43] registered Jul 14 22:45:16.763080 kernel: acpiphp: Slot [44] registered Jul 14 22:45:16.763086 kernel: acpiphp: Slot [45] registered Jul 14 22:45:16.763091 kernel: acpiphp: Slot [46] registered Jul 14 22:45:16.763097 kernel: acpiphp: Slot [47] registered Jul 14 22:45:16.763103 kernel: acpiphp: Slot [48] registered Jul 14 22:45:16.763108 kernel: acpiphp: Slot [49] registered Jul 14 22:45:16.763114 kernel: acpiphp: Slot [50] registered Jul 14 22:45:16.763121 kernel: acpiphp: Slot [51] registered Jul 14 22:45:16.763127 kernel: acpiphp: Slot [52] registered Jul 14 22:45:16.763133 kernel: acpiphp: Slot [53] registered Jul 14 22:45:16.763138 kernel: acpiphp: Slot [54] registered Jul 14 
22:45:16.763144 kernel: acpiphp: Slot [55] registered Jul 14 22:45:16.763149 kernel: acpiphp: Slot [56] registered Jul 14 22:45:16.763155 kernel: acpiphp: Slot [57] registered Jul 14 22:45:16.763161 kernel: acpiphp: Slot [58] registered Jul 14 22:45:16.763167 kernel: acpiphp: Slot [59] registered Jul 14 22:45:16.763172 kernel: acpiphp: Slot [60] registered Jul 14 22:45:16.763179 kernel: acpiphp: Slot [61] registered Jul 14 22:45:16.763185 kernel: acpiphp: Slot [62] registered Jul 14 22:45:16.763191 kernel: acpiphp: Slot [63] registered Jul 14 22:45:16.763242 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 22:45:16.763293 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:45:16.763354 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.763405 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.763460 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 22:45:16.763515 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 22:45:16.763566 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 22:45:16.763617 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 22:45:16.763668 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 22:45:16.763725 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 14 22:45:16.763778 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 14 22:45:16.763830 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 22:45:16.763886 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 22:45:16.763938 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.763990 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Jul 14 22:45:16.764042 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:45:16.764094 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:45:16.764145 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.764198 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:45:16.764249 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:45:16.764303 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.765874 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:45:16.765936 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:45:16.765991 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:45:16.766044 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.766097 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.766150 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:45:16.766206 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.766258 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.766310 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:45:16.766370 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.766422 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.766477 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:45:16.766530 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.766583 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.766635 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:45:16.766687 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.766739 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.766791 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:45:16.766843 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.766898 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.766957 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 14 22:45:16.767012 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 14 22:45:16.767066 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 14 22:45:16.767119 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 14 22:45:16.767172 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 14 22:45:16.767225 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 22:45:16.767281 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 22:45:16.769355 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 22:45:16.769418 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 22:45:16.769473 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:45:16.769527 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:45:16.769579 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 22:45:16.769631 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:45:16.769684 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:45:16.769740 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.769792 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.769845 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:45:16.769897 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:45:16.769949 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.770000 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.770052 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:45:16.770107 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.770158 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.770212 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:45:16.770264 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.770315 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.770378 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:45:16.770430 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.770489 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.770546 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:45:16.770598 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.770649 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.770701 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:45:16.770753 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.770805 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.770857 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:45:16.770908 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:45:16.770963 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.771016 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.771069 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:45:16.771121 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:45:16.771172 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.771223 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.771275 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:45:16.774354 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:45:16.774422 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.774483 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.774537 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:45:16.774590 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.774642 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.774694 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:45:16.774745 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.774796 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.774852 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:45:16.774905 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.774956 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:45:16.775008 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:45:16.775059 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.775111 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.775163 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:45:16.775214 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.775270 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.775332 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:45:16.775391 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:45:16.775442 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.775495 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.775548 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:45:16.775600 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:45:16.775655 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.775706 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.775759 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:45:16.775811 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.775863 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.775915 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:45:16.775968 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.776019 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.776074 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 
22:45:16.776125 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.776177 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:45:16.776229 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:45:16.776281 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.780466 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.780528 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:45:16.780583 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.780639 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.780694 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:45:16.780745 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.780797 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.780806 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 22:45:16.780812 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 22:45:16.780818 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 22:45:16.780824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:45:16.780830 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 22:45:16.780838 kernel: iommu: Default domain type: Translated Jul 14 22:45:16.780845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:45:16.780850 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:45:16.780856 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:45:16.780862 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 22:45:16.780868 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 22:45:16.780919 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 22:45:16.780971 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jul 14 22:45:16.781022 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:45:16.781034 kernel: vgaarb: loaded Jul 14 22:45:16.781041 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 22:45:16.781046 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 22:45:16.781052 kernel: clocksource: Switched to clocksource tsc-early Jul 14 22:45:16.781058 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:45:16.781065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:45:16.781070 kernel: pnp: PnP ACPI init Jul 14 22:45:16.781125 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 22:45:16.781176 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 14 22:45:16.781224 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 22:45:16.781275 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 22:45:16.781332 kernel: pnp 00:06: [dma 2] Jul 14 22:45:16.781384 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 22:45:16.781431 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 22:45:16.781481 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 22:45:16.781490 kernel: pnp: PnP ACPI: found 8 devices Jul 14 22:45:16.781496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:45:16.781502 kernel: NET: Registered PF_INET protocol family Jul 14 22:45:16.781508 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:45:16.781514 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 14 22:45:16.781520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:45:16.781526 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 22:45:16.781532 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 14 22:45:16.781540 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 22:45:16.781546 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:45:16.781552 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:45:16.781558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:45:16.781564 kernel: NET: Registered PF_XDP protocol family Jul 14 22:45:16.781617 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 22:45:16.781671 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 22:45:16.781724 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 22:45:16.781779 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 22:45:16.781832 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 22:45:16.781885 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 22:45:16.781938 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 22:45:16.781991 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 22:45:16.782043 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 22:45:16.782098 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 22:45:16.782151 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 14 22:45:16.782204 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 22:45:16.782256 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 
22:45:16.782308 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 22:45:16.782378 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 22:45:16.782431 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 22:45:16.782491 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 22:45:16.782543 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 22:45:16.782595 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 22:45:16.782647 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 22:45:16.782703 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 22:45:16.782755 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 22:45:16.782808 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 22:45:16.782860 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.782911 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.782963 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783015 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783070 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783122 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783175 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783226 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783278 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783340 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 
14 22:45:16.783393 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783445 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783500 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783551 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783604 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783655 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783708 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783760 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783812 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783865 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783974 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784026 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784078 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784130 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784182 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784234 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784286 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789217 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789278 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789345 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789402 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789455 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789507 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789559 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789611 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789667 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789719 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789770 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789822 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789874 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789926 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789978 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790029 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790084 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790135 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790187 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790239 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790290 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790350 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790402 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790458 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790510 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790561 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790616 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jul 14 22:45:16.790667 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790718 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790769 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790821 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790872 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790923 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790975 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791027 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791082 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791134 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791187 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791239 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791291 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791365 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791419 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791475 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791528 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791581 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791637 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791689 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791742 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791794 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791845 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791898 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791950 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792002 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792054 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792109 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792161 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792214 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792266 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792326 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:45:16.792382 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 22:45:16.792434 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:45:16.792485 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.792537 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.792597 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 14 22:45:16.792650 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:45:16.792704 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:45:16.792756 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.792808 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.792862 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:45:16.792914 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:45:16.792967 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.793023 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 
22:45:16.793077 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:45:16.793130 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:45:16.793183 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.793235 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.793287 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:45:16.795457 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.795516 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.795571 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:45:16.795623 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.795678 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.795732 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:45:16.795784 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.795837 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.795888 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:45:16.795940 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.795994 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.796045 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:45:16.796096 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.796147 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.796202 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 14 22:45:16.796254 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:45:16.796305 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:45:16.796371 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jul 14 22:45:16.796424 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.796481 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:45:16.796534 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:45:16.796586 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.796637 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.796690 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:45:16.796742 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:45:16.796794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.796845 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.796897 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:45:16.796951 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.797003 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.797055 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:45:16.797107 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.797158 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.797210 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:45:16.797262 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.797313 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.799458 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:45:16.799517 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.799575 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.799629 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:45:16.799681 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.799733 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.799787 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:45:16.799839 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:45:16.799891 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.799943 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.799997 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:45:16.800052 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:45:16.800105 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.800157 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.800211 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:45:16.800263 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:45:16.800316 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.800377 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.800430 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:45:16.800486 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.800538 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.800594 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:45:16.800646 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.800698 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.800751 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:45:16.800803 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.800855 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 
22:45:16.800907 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:45:16.800959 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.801010 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.801065 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:45:16.801118 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.801170 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.801223 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:45:16.801275 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:45:16.801548 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.801608 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.801663 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:45:16.801717 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:45:16.801769 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.801824 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.801877 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:45:16.801929 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.801981 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.802034 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:45:16.802087 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.802139 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.802192 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 22:45:16.802244 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.802298 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jul 14 22:45:16.802359 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:45:16.802412 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.802463 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.802515 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:45:16.802569 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.802621 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.802674 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:45:16.802726 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.802779 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.802833 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:45:16.802881 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:45:16.802928 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:45:16.802974 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:45:16.803020 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:45:16.803072 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 22:45:16.803120 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.803171 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.803218 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:45:16.803266 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:45:16.803314 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:45:16.803800 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:45:16.803851 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:45:16.803905 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 22:45:16.803953 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.804004 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.804058 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 22:45:16.804106 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.804153 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:45:16.804205 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 22:45:16.804253 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.804304 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.804374 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.804423 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.804482 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.804531 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.804583 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.804631 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.804688 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.804737 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.804789 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.804837 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.804902 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 22:45:16.804954 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 22:45:16.805002 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.805056 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 22:45:16.805105 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.805156 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.805209 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 22:45:16.805259 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.805313 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.805428 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.805477 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.805529 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.805578 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.805886 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.805944 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.805998 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.806047 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.806099 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.806147 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.806199 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 22:45:16.806257 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.806564 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.806631 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 22:45:16.806680 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.806728 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.806780 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jul 14 22:45:16.806828 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.806879 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.806932 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.806981 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.807032 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.807081 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.807132 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.807181 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:45:16.807236 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.807284 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.807618 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.807672 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.807725 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 22:45:16.807774 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.807825 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.807876 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 22:45:16.807924 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.807973 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.808027 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.808075 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.808130 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.808178 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.808229 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.808279 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:45:16.808363 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.808413 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.808468 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.808863 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.808921 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.808972 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.809032 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 22:45:16.809042 kernel: PCI: CLS 32 bytes, default 64 Jul 14 22:45:16.809049 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 22:45:16.809058 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 22:45:16.809065 kernel: clocksource: Switched to clocksource tsc Jul 14 22:45:16.809071 kernel: Initialise system trusted keyrings Jul 14 22:45:16.809077 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 22:45:16.809084 kernel: Key type asymmetric registered Jul 14 22:45:16.809090 kernel: Asymmetric key parser 'x509' registered Jul 14 22:45:16.809096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 22:45:16.809104 kernel: io scheduler mq-deadline registered Jul 14 22:45:16.809110 kernel: io scheduler kyber registered Jul 14 22:45:16.809118 kernel: io scheduler bfq registered Jul 14 22:45:16.809180 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 22:45:16.809236 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809303 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 22:45:16.809395 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809460 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 22:45:16.809515 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809569 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 22:45:16.809631 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809685 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 22:45:16.809738 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809792 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 22:45:16.809844 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809900 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 22:45:16.809954 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810007 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 22:45:16.810061 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810115 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 22:45:16.810171 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810226 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 22:45:16.810279 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810666 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 22:45:16.810729 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810786 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 22:45:16.810841 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810899 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 22:45:16.810953 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811008 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 22:45:16.811061 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811115 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 22:45:16.811172 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811226 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 22:45:16.811279 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811375 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 22:45:16.811432 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811486 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 22:45:16.811538 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811595 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 22:45:16.811647 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811701 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 22:45:16.811754 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811807 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 22:45:16.811860 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811916 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 22:45:16.811970 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812024 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 22:45:16.812370 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812436 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 22:45:16.812503 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812559 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 22:45:16.812612 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812665 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 22:45:16.812718 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812771 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 22:45:16.812827 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812879 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 22:45:16.812932 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812986 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 22:45:16.813038 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813091 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 22:45:16.813146 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813200 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 22:45:16.813253 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813305 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 22:45:16.813397 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813410 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jul 14 22:45:16.813417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:45:16.813423 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:45:16.813430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 22:45:16.813436 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:45:16.813442 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:45:16.813497 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 22:45:16.813547 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T22:45:16 UTC (1752533116) Jul 14 22:45:16.813558 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:45:16.813605 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 22:45:16.813614 kernel: intel_pstate: CPU model not supported Jul 14 22:45:16.813621 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:45:16.813627 kernel: Segment Routing with IPv6 Jul 14 22:45:16.813634 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:45:16.813640 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:45:16.813647 kernel: Key type dns_resolver registered Jul 14 22:45:16.813653 kernel: IPI shorthand broadcast: enabled Jul 14 22:45:16.813661 kernel: sched_clock: Marking stable (937397473, 233185090)->(1235031317, -64448754) Jul 14 22:45:16.813667 kernel: registered taskstats version 1 Jul 14 22:45:16.813673 kernel: Loading compiled-in X.509 certificates Jul 14 22:45:16.813680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 22:45:16.813686 kernel: Key type .fscrypt registered Jul 14 22:45:16.813692 kernel: Key type fscrypt-provisioning registered Jul 14 22:45:16.813698 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 22:45:16.813705 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:45:16.813712 kernel: ima: No architecture policies found Jul 14 22:45:16.813719 kernel: clk: Disabling unused clocks Jul 14 22:45:16.813725 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 22:45:16.813732 kernel: Write protecting the kernel read-only data: 36864k Jul 14 22:45:16.813738 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 22:45:16.813744 kernel: Run /init as init process Jul 14 22:45:16.813751 kernel: with arguments: Jul 14 22:45:16.813757 kernel: /init Jul 14 22:45:16.813763 kernel: with environment: Jul 14 22:45:16.813769 kernel: HOME=/ Jul 14 22:45:16.813777 kernel: TERM=linux Jul 14 22:45:16.813783 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:45:16.813790 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:45:16.813798 systemd[1]: Detected virtualization vmware. Jul 14 22:45:16.813805 systemd[1]: Detected architecture x86-64. Jul 14 22:45:16.813811 systemd[1]: Running in initrd. Jul 14 22:45:16.813818 systemd[1]: No hostname configured, using default hostname. Jul 14 22:45:16.813825 systemd[1]: Hostname set to . Jul 14 22:45:16.813832 systemd[1]: Initializing machine ID from random generator. Jul 14 22:45:16.813839 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:45:16.813845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:45:16.813852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 14 22:45:16.813859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 22:45:16.813865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:45:16.813872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 22:45:16.813879 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 22:45:16.813887 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 22:45:16.813894 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 22:45:16.813900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:45:16.813907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:45:16.813913 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:45:16.813921 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:45:16.813928 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:45:16.813936 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:45:16.813943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:45:16.813949 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:45:16.813955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:45:16.813962 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 22:45:16.813968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:45:16.813975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:45:16.813982 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 22:45:16.813989 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:45:16.813996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 22:45:16.814002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:45:16.814009 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 22:45:16.814015 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:45:16.814022 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:45:16.814028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:45:16.814035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:16.814054 systemd-journald[215]: Collecting audit messages is disabled. Jul 14 22:45:16.814071 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 22:45:16.814078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:45:16.814084 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:45:16.814093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:45:16.814101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:45:16.814107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:45:16.814114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:45:16.814122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:16.814128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:16.814135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 22:45:16.814350 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:45:16.814359 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 22:45:16.814366 kernel: Bridge firewalling registered Jul 14 22:45:16.814372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:45:16.814379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:45:16.814386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:45:16.814396 systemd-journald[215]: Journal started Jul 14 22:45:16.814411 systemd-journald[215]: Runtime Journal (/run/log/journal/847721fa8d474d9e8d304217707f7cb7) is 4.8M, max 38.6M, 33.8M free. Jul 14 22:45:16.764394 systemd-modules-load[216]: Inserted module 'overlay' Jul 14 22:45:16.799187 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 14 22:45:16.816709 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:45:16.817002 dracut-cmdline[232]: dracut-dracut-053 Jul 14 22:45:16.817002 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:45:16.822405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:45:16.827686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:45:16.829400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 14 22:45:16.846610 systemd-resolved[280]: Positive Trust Anchors: Jul 14 22:45:16.846618 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:45:16.846639 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:45:16.849139 systemd-resolved[280]: Defaulting to hostname 'linux'. Jul 14 22:45:16.849724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:45:16.849863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:45:16.859332 kernel: SCSI subsystem initialized Jul 14 22:45:16.867331 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:45:16.873330 kernel: iscsi: registered transport (tcp) Jul 14 22:45:16.888332 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:45:16.888348 kernel: QLogic iSCSI HBA Driver Jul 14 22:45:16.908312 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 22:45:16.911417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 22:45:16.926769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 14 22:45:16.926803 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:45:16.927930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 22:45:16.960366 kernel: raid6: avx2x4 gen() 52067 MB/s Jul 14 22:45:16.976340 kernel: raid6: avx2x2 gen() 52688 MB/s Jul 14 22:45:16.993571 kernel: raid6: avx2x1 gen() 43126 MB/s Jul 14 22:45:16.993634 kernel: raid6: using algorithm avx2x2 gen() 52688 MB/s Jul 14 22:45:17.011582 kernel: raid6: .... xor() 30153 MB/s, rmw enabled Jul 14 22:45:17.011652 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:45:17.025342 kernel: xor: automatically using best checksumming function avx Jul 14 22:45:17.127672 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 22:45:17.132140 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:45:17.136413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:45:17.143773 systemd-udevd[431]: Using default interface naming scheme 'v255'. Jul 14 22:45:17.146246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:45:17.153410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 22:45:17.160056 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Jul 14 22:45:17.175156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:45:17.178415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:45:17.250298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:45:17.256377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 22:45:17.264527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 22:45:17.265077 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 14 22:45:17.265451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:45:17.265571 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:45:17.274475 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 22:45:17.281848 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:45:17.312398 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 22:45:17.312437 kernel: vmw_pvscsi: using 64bit dma Jul 14 22:45:17.323232 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 14 22:45:17.323258 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 22:45:17.323379 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 22:45:17.325742 kernel: vmw_pvscsi: max_id: 16 Jul 14 22:45:17.325758 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 22:45:17.331329 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 22:45:17.331347 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 22:45:17.331359 kernel: vmw_pvscsi: using MSI-X Jul 14 22:45:17.334336 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 22:45:17.334363 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:45:17.340348 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 22:45:17.343335 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 22:45:17.345340 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 22:45:17.349014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:45:17.349049 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:45:17.351386 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 14 22:45:17.351399 kernel: AES CTR mode by8 optimization enabled Jul 14 22:45:17.349221 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:17.349314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:45:17.349344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:17.349446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:17.357457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:17.363559 kernel: libata version 3.00 loaded. Jul 14 22:45:17.368334 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 22:45:17.371350 kernel: scsi host1: ata_piix Jul 14 22:45:17.372483 kernel: scsi host2: ata_piix Jul 14 22:45:17.372565 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 14 22:45:17.372575 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 14 22:45:17.373329 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 22:45:17.373436 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 22:45:17.373505 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 22:45:17.373568 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 22:45:17.373629 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 22:45:17.383219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:17.386405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:17.397875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 22:45:17.416339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:17.416368 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 22:45:17.545341 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 22:45:17.549340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 22:45:17.572652 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 22:45:17.572797 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:45:17.582329 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:45:17.582427 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487) Jul 14 22:45:17.583578 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 14 22:45:17.586236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 14 22:45:17.588334 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (481) Jul 14 22:45:17.591494 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 22:45:17.595917 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 14 22:45:17.596028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 14 22:45:17.600469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 22:45:17.631352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:17.638348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:18.638348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:18.638470 disk-uuid[589]: The operation has completed successfully. Jul 14 22:45:18.670365 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 14 22:45:18.670431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:45:18.674411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:45:18.676259 sh[606]: Success Jul 14 22:45:18.685362 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 14 22:45:18.719487 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:45:18.720374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 22:45:18.720684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:45:18.736160 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 22:45:18.736182 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:18.736190 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 22:45:18.737245 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 22:45:18.738032 kernel: BTRFS info (device dm-0): using free space tree Jul 14 22:45:18.745329 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 14 22:45:18.748279 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:45:18.757386 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 14 22:45:18.758512 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:45:18.777128 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:18.777152 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:18.777160 kernel: BTRFS info (device sda6): using free space tree Jul 14 22:45:18.783329 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 22:45:18.791889 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 14 22:45:18.793341 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:18.799167 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:45:18.802409 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:45:18.824982 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 22:45:18.830568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 22:45:18.882012 ignition[666]: Ignition 2.19.0 Jul 14 22:45:18.882019 ignition[666]: Stage: fetch-offline Jul 14 22:45:18.882042 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.882048 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.882099 ignition[666]: parsed url from cmdline: "" Jul 14 22:45:18.882101 ignition[666]: no config URL provided Jul 14 22:45:18.882103 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:45:18.882108 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:45:18.882478 ignition[666]: config successfully fetched Jul 14 22:45:18.882496 ignition[666]: parsing config with SHA512: ad7da1504b708e0e7711baec01e44d45edc790e831c71dabaedeb448754a945eb4f3fd087cb5c07fc32466e7de36c1cbc50384d417d579a8c67227c727c8142e Jul 14 22:45:18.887952 unknown[666]: fetched base config from "system" Jul 14 22:45:18.888062 unknown[666]: fetched user config from "vmware" Jul 14 22:45:18.888357 ignition[666]: fetch-offline: fetch-offline passed Jul 14 22:45:18.888393 ignition[666]: Ignition finished successfully Jul 14 22:45:18.889583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:45:18.889831 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 14 22:45:18.893418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:45:18.904999 systemd-networkd[800]: lo: Link UP Jul 14 22:45:18.905005 systemd-networkd[800]: lo: Gained carrier Jul 14 22:45:18.905702 systemd-networkd[800]: Enumeration completed Jul 14 22:45:18.905861 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:45:18.905968 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 22:45:18.906000 systemd[1]: Reached target network.target - Network. Jul 14 22:45:18.906091 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:45:18.909225 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 22:45:18.909792 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 22:45:18.909475 systemd-networkd[800]: ens192: Link UP Jul 14 22:45:18.909478 systemd-networkd[800]: ens192: Gained carrier Jul 14 22:45:18.918405 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 22:45:18.925837 ignition[802]: Ignition 2.19.0 Jul 14 22:45:18.925843 ignition[802]: Stage: kargs Jul 14 22:45:18.925946 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.925952 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.926569 ignition[802]: kargs: kargs passed Jul 14 22:45:18.926597 ignition[802]: Ignition finished successfully Jul 14 22:45:18.927786 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 22:45:18.931422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 14 22:45:18.939073 ignition[809]: Ignition 2.19.0 Jul 14 22:45:18.939081 ignition[809]: Stage: disks Jul 14 22:45:18.939203 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.939209 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.939790 ignition[809]: disks: disks passed Jul 14 22:45:18.939814 ignition[809]: Ignition finished successfully Jul 14 22:45:18.940462 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:45:18.940839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:45:18.940976 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:45:18.941167 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:45:18.941364 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:45:18.941535 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:45:18.946410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:45:18.956575 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 14 22:45:18.957460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:45:18.962407 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:45:19.023395 kernel: EXT4-fs (sda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 22:45:19.023733 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:45:19.024089 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:45:19.033375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:45:19.034706 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 14 22:45:19.034979 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:45:19.035005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:45:19.035020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:45:19.038016 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:45:19.038957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:45:19.041505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (825) Jul 14 22:45:19.044660 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:19.044682 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:19.044691 kernel: BTRFS info (device sda6): using free space tree Jul 14 22:45:19.049332 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 22:45:19.050931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:45:19.067688 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:45:19.070650 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:45:19.072949 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:45:19.074819 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:45:19.134778 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:45:19.137431 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:45:19.139736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 14 22:45:19.145424 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:45:19.156419 ignition[937]: INFO : Ignition 2.19.0
Jul 14 22:45:19.156664 ignition[937]: INFO : Stage: mount
Jul 14 22:45:19.156916 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:45:19.157050 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 22:45:19.158268 ignition[937]: INFO : mount: mount passed
Jul 14 22:45:19.158471 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 22:45:19.158679 ignition[937]: INFO : Ignition finished successfully
Jul 14 22:45:19.159364 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 22:45:19.163414 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 22:45:19.734828 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 22:45:19.740504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:45:19.748358 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (950)
Jul 14 22:45:19.751012 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:45:19.751036 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:45:19.751045 kernel: BTRFS info (device sda6): using free space tree
Jul 14 22:45:19.755354 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 14 22:45:19.762206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:45:19.778990 ignition[967]: INFO : Ignition 2.19.0
Jul 14 22:45:19.778990 ignition[967]: INFO : Stage: files
Jul 14 22:45:19.778990 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:45:19.778990 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 22:45:19.779762 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 22:45:19.785914 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 22:45:19.785914 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 22:45:19.789684 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 22:45:19.789847 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 22:45:19.790018 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 22:45:19.789924 unknown[967]: wrote ssh authorized keys file for user: core
Jul 14 22:45:19.791866 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 14 22:45:19.852730 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 22:45:20.041285 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:45:20.041285 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 14 22:45:20.754439 systemd-networkd[800]: ens192: Gained IPv6LL
Jul 14 22:45:20.843539 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 22:45:21.091433 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:45:21.091433 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 14 22:45:21.092003 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 14 22:45:21.092003 ignition[967]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 14 22:45:21.092732 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:45:21.229304 ignition[967]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:45:21.234173 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:45:21.234173 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:45:21.234173 ignition[967]: INFO : files: files passed
Jul 14 22:45:21.234173 ignition[967]: INFO : Ignition finished successfully
Jul 14 22:45:21.234418 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:45:21.239412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:45:21.240862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:45:21.242156 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:45:21.242373 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:45:21.247613 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:45:21.247613 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:45:21.248642 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:45:21.249564 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:45:21.249957 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 22:45:21.254502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 22:45:21.267618 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 22:45:21.267688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 22:45:21.267974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 22:45:21.268092 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 22:45:21.268288 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 22:45:21.271430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 22:45:21.278200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:45:21.282438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 22:45:21.287781 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:45:21.288056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:45:21.288213 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 22:45:21.288366 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 22:45:21.288446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:45:21.288663 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 22:45:21.288881 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 22:45:21.289204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 22:45:21.289413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:45:21.289623 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 22:45:21.289825 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 22:45:21.290004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:45:21.290218 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 22:45:21.290428 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 22:45:21.290620 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 22:45:21.290781 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 22:45:21.290842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:45:21.291124 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:45:21.291254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:45:21.291469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 22:45:21.291514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:45:21.291654 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 22:45:21.291713 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:45:21.291952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 22:45:21.292016 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:45:21.292260 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 22:45:21.292412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 22:45:21.298340 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:45:21.298508 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 22:45:21.298702 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 22:45:21.298881 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 22:45:21.298950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:45:21.299168 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 22:45:21.299214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:45:21.299452 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 22:45:21.299514 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:45:21.299769 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 22:45:21.299825 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 22:45:21.304416 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 22:45:21.306300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 22:45:21.306496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 22:45:21.306583 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:45:21.306823 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 22:45:21.306898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:45:21.309957 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 22:45:21.311061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 22:45:21.315136 ignition[1022]: INFO : Ignition 2.19.0
Jul 14 22:45:21.315136 ignition[1022]: INFO : Stage: umount
Jul 14 22:45:21.315469 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:45:21.315469 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 22:45:21.317289 ignition[1022]: INFO : umount: umount passed
Jul 14 22:45:21.317448 ignition[1022]: INFO : Ignition finished successfully
Jul 14 22:45:21.318305 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 22:45:21.319402 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 22:45:21.319604 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 22:45:21.319904 systemd[1]: Stopped target network.target - Network.
Jul 14 22:45:21.320118 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 22:45:21.320145 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 22:45:21.320420 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 22:45:21.320444 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 22:45:21.320913 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 22:45:21.320938 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 22:45:21.321047 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 22:45:21.321069 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 22:45:21.321253 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:45:21.321640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:45:21.324097 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:45:21.324158 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:45:21.324474 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:45:21.324499 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:45:21.329433 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:45:21.329586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:45:21.329613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:45:21.330410 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 14 22:45:21.330439 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 14 22:45:21.330688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:45:21.334749 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:45:21.334815 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:45:21.336152 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:45:21.336196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:45:21.336463 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:45:21.336485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:45:21.336943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:45:21.336966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:45:21.337578 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:45:21.337645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:45:21.338264 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:45:21.338299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:45:21.338516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:45:21.338534 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:45:21.338684 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:45:21.338707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:45:21.338985 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:45:21.339006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:45:21.339556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:45:21.339579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:45:21.345547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:45:21.345770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:45:21.345798 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:45:21.345917 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 14 22:45:21.345938 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:45:21.346053 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:45:21.346074 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:45:21.346187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:45:21.346206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:45:21.346475 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:45:21.346532 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:45:21.348569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:45:21.348615 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:45:21.427094 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 22:45:21.427186 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 22:45:21.427689 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:45:21.427860 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 22:45:21.427898 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 22:45:21.431447 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:45:21.457339 systemd[1]: Switching root.
Jul 14 22:45:21.486132 systemd-journald[215]: Journal stopped Jul 14 22:45:16.741931 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025 Jul 14 22:45:16.741948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:45:16.741954 kernel: Disabled fast string operations Jul 14 22:45:16.741958 kernel: BIOS-provided physical RAM map: Jul 14 22:45:16.741962 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 14 22:45:16.741966 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 14 22:45:16.741972 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 14 22:45:16.741977 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 14 22:45:16.741981 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 14 22:45:16.741985 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 14 22:45:16.741989 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 14 22:45:16.741993 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 14 22:45:16.741997 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 14 22:45:16.742001 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 14 22:45:16.742008 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 14 22:45:16.742012 kernel: NX (Execute Disable) protection: active Jul 14 22:45:16.742017 kernel: 
APIC: Static calls initialized Jul 14 22:45:16.742022 kernel: SMBIOS 2.7 present. Jul 14 22:45:16.742027 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 14 22:45:16.742031 kernel: vmware: hypercall mode: 0x00 Jul 14 22:45:16.742036 kernel: Hypervisor detected: VMware Jul 14 22:45:16.742041 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 14 22:45:16.742047 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 14 22:45:16.742051 kernel: vmware: using clock offset of 4240981186 ns Jul 14 22:45:16.742056 kernel: tsc: Detected 3408.000 MHz processor Jul 14 22:45:16.742061 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 14 22:45:16.742066 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 14 22:45:16.742071 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 14 22:45:16.742076 kernel: total RAM covered: 3072M Jul 14 22:45:16.742080 kernel: Found optimal setting for mtrr clean up Jul 14 22:45:16.742086 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 14 22:45:16.742092 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 14 22:45:16.742097 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 14 22:45:16.742101 kernel: Using GB pages for direct mapping Jul 14 22:45:16.742106 kernel: ACPI: Early table checksum verification disabled Jul 14 22:45:16.742111 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 14 22:45:16.742116 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 14 22:45:16.742120 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 14 22:45:16.742125 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 14 22:45:16.742130 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 14 22:45:16.742138 kernel: ACPI: FACS 0x000000007FEFFFC0 
000040 Jul 14 22:45:16.742143 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 14 22:45:16.742148 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 14 22:45:16.742153 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 14 22:45:16.742158 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 14 22:45:16.742164 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 14 22:45:16.742169 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 14 22:45:16.742175 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 14 22:45:16.742180 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 14 22:45:16.742185 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 22:45:16.742190 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 14 22:45:16.742195 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 14 22:45:16.742200 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 14 22:45:16.742205 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 14 22:45:16.742210 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 14 22:45:16.742216 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 14 22:45:16.742221 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 14 22:45:16.742226 kernel: system APIC only can use physical flat Jul 14 22:45:16.742231 kernel: APIC: Switched APIC routing to: physical flat Jul 14 22:45:16.742236 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 14 22:45:16.742241 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 14 22:45:16.742247 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 14 
22:45:16.742252 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 14 22:45:16.742256 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 14 22:45:16.742263 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 14 22:45:16.742267 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 14 22:45:16.742273 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 14 22:45:16.742278 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 14 22:45:16.742282 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 14 22:45:16.742287 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 14 22:45:16.742292 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 14 22:45:16.742297 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 14 22:45:16.742302 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 14 22:45:16.742307 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 14 22:45:16.742313 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 14 22:45:16.743339 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 14 22:45:16.743349 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 14 22:45:16.743355 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 14 22:45:16.743360 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 14 22:45:16.743365 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 14 22:45:16.743370 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 14 22:45:16.743375 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 14 22:45:16.743381 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 14 22:45:16.743386 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 14 22:45:16.743390 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 14 22:45:16.743398 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 14 22:45:16.743403 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 14 22:45:16.743408 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 14 22:45:16.743413 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 14 22:45:16.743418 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 14 22:45:16.743423 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 14 22:45:16.743428 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 14 22:45:16.743433 
kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 14 22:45:16.743438 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 14 22:45:16.743443 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 14 22:45:16.743454 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 14 22:45:16.743463 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 14 22:45:16.743470 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 14 22:45:16.743479 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 14 22:45:16.743485 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 14 22:45:16.743490 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 14 22:45:16.743495 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 14 22:45:16.743500 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 14 22:45:16.743505 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 14 22:45:16.743510 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 14 22:45:16.743517 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 14 22:45:16.743522 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 14 22:45:16.743527 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 14 22:45:16.743532 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 14 22:45:16.743537 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 14 22:45:16.743542 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 14 22:45:16.743547 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 14 22:45:16.743552 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 14 22:45:16.743556 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 14 22:45:16.743561 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 14 22:45:16.743567 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 14 22:45:16.743572 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 14 22:45:16.743578 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 14 22:45:16.743587 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 14 22:45:16.743592 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 14 22:45:16.743598 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 14 22:45:16.743603 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 14 22:45:16.743608 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 14 22:45:16.743613 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 14 22:45:16.743620 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 14 22:45:16.743625 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 14 22:45:16.743631 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 14 22:45:16.743636 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 14 22:45:16.743641 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 14 22:45:16.743647 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 14 22:45:16.743652 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 14 22:45:16.743657 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 14 22:45:16.743662 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 14 22:45:16.743668 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 14 22:45:16.743674 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 14 22:45:16.743680 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 14 22:45:16.743686 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 14 22:45:16.743691 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 14 22:45:16.743696 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 14 22:45:16.743701 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 14 22:45:16.743707 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 14 22:45:16.743712 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 14 22:45:16.743717 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 14 22:45:16.743723 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 14 22:45:16.743729 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 14 22:45:16.743735 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 14 22:45:16.743740 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 14 22:45:16.743745 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 14 22:45:16.743751 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 14 22:45:16.743756 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 14 22:45:16.743761 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 14 22:45:16.743767 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 14 22:45:16.743772 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 14 22:45:16.743777 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 14 22:45:16.743784 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 14 22:45:16.743789 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 14 22:45:16.743794 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 14 22:45:16.743800 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 14 22:45:16.743805 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 14 22:45:16.743810 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 14 22:45:16.743815 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 14 22:45:16.743821 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 14 22:45:16.743826 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 14 22:45:16.743831 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 14 22:45:16.743838 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 14 22:45:16.743843 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 14 22:45:16.743848 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 14 22:45:16.743853 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 14 22:45:16.743859 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 14 22:45:16.743864 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 14 22:45:16.743869 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 14 22:45:16.743875 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 14 22:45:16.743880 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 14 22:45:16.743885 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 14 22:45:16.743891 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 14 22:45:16.743897 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 14 22:45:16.743902 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 14 22:45:16.743908 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 14 22:45:16.743913 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 14 22:45:16.743919 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 14 22:45:16.743924 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 14 22:45:16.743929 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 14 22:45:16.743934 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 14 22:45:16.743940 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 14 22:45:16.743945 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 14 22:45:16.743951 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 14 22:45:16.743957 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 14 22:45:16.743963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 14 22:45:16.743968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 14 22:45:16.743974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 14 22:45:16.743979 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 14 22:45:16.743985 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 14 22:45:16.743991 kernel: Zone ranges:
Jul 14 22:45:16.743996 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:45:16.744003 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 14 22:45:16.744008 kernel: Normal empty
Jul 14 22:45:16.744014 kernel: Movable zone start for each node
Jul 14 22:45:16.744019 kernel: Early memory node ranges
Jul 14 22:45:16.744024 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 14 22:45:16.744030 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 14 22:45:16.744035 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 14 22:45:16.744041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 14 22:45:16.744046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:45:16.744052 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 14 22:45:16.744058 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 14 22:45:16.744064 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 14 22:45:16.744069 kernel: system APIC only can use physical flat
Jul 14 22:45:16.744074 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 14 22:45:16.744080 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 14 22:45:16.744085 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 14 22:45:16.744091 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 14 22:45:16.744096 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 14 22:45:16.744102 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 14 22:45:16.744108 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 14 22:45:16.744114 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 14 22:45:16.744119 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 14 22:45:16.744124 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 14 22:45:16.744130 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 14 22:45:16.744135 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 14 22:45:16.744140 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 14 22:45:16.744146 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 14 22:45:16.744151 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 14 22:45:16.744157 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 14 22:45:16.744163 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 14 22:45:16.744169 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 14 22:45:16.744174 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 14 22:45:16.744179 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 14 22:45:16.744185 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 14 22:45:16.744190 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 14 22:45:16.744195 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 14 22:45:16.744201 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 14 22:45:16.744206 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 14 22:45:16.744212 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 14 22:45:16.744218 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 14 22:45:16.744224 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 14 22:45:16.744229 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 14 22:45:16.744235 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 14 22:45:16.744240 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 14 22:45:16.744245 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 14 22:45:16.744251 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 14 22:45:16.744256 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 14 22:45:16.744262 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 14 22:45:16.744268 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 14 22:45:16.744273 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 14 22:45:16.744279 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 14 22:45:16.744284 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 14 22:45:16.744289 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 14 22:45:16.744295 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 14 22:45:16.744300 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 14 22:45:16.744305 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 14 22:45:16.744311 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 14 22:45:16.744316 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 14 22:45:16.744329 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 14 22:45:16.744335 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 14 22:45:16.744340 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 14 22:45:16.744345 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 14 22:45:16.744351 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 14 22:45:16.744356 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 14 22:45:16.744362 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 14 22:45:16.744367 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 14 22:45:16.744373 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 14 22:45:16.744378 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 14 22:45:16.744384 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 14 22:45:16.744390 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 14 22:45:16.744395 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 14 22:45:16.744401 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 14 22:45:16.744406 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 14 22:45:16.744411 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 14 22:45:16.744417 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 14 22:45:16.744422 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 14 22:45:16.744427 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 14 22:45:16.744434 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 14 22:45:16.744439 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 14 22:45:16.744445 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 14 22:45:16.744454 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 14 22:45:16.744462 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 14 22:45:16.744467 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 14 22:45:16.744473 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 14 22:45:16.744479 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 14 22:45:16.744484 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 14 22:45:16.744489 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 14 22:45:16.744497 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 14 22:45:16.744502 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 14 22:45:16.744508 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 14 22:45:16.744513 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 14 22:45:16.744518 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 14 22:45:16.744524 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 14 22:45:16.744529 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 14 22:45:16.744535 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 14 22:45:16.744540 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 14 22:45:16.744545 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 14 22:45:16.744552 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 14 22:45:16.744557 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 14 22:45:16.744562 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 14 22:45:16.744568 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 14 22:45:16.744573 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 14 22:45:16.744579 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 14 22:45:16.744584 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 14 22:45:16.744590 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 14 22:45:16.744595 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 14 22:45:16.744602 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 14 22:45:16.744607 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 14 22:45:16.744612 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 14 22:45:16.744618 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 14 22:45:16.744623 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 14 22:45:16.744629 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 14 22:45:16.744634 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 14 22:45:16.744639 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 14 22:45:16.744645 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 14 22:45:16.744655 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 14 22:45:16.744664 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 14 22:45:16.744670 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 14 22:45:16.744675 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 14 22:45:16.744680 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 14 22:45:16.744686 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 14 22:45:16.744691 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 14 22:45:16.744696 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 14 22:45:16.744702 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 14 22:45:16.744707 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 14 22:45:16.744714 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 14 22:45:16.744719 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 14 22:45:16.744724 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 14 22:45:16.744730 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 14 22:45:16.744739 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 14 22:45:16.744744 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 14 22:45:16.744749 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 14 22:45:16.744755 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 14 22:45:16.744761 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 14 22:45:16.744766 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 14 22:45:16.744772 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 14 22:45:16.744778 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 14 22:45:16.744783 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 14 22:45:16.744789 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 14 22:45:16.744794 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 14 22:45:16.744800 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 14 22:45:16.744805 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:45:16.744811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 14 22:45:16.744816 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:45:16.744822 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 14 22:45:16.744828 kernel: TSC deadline timer available
Jul 14 22:45:16.744837 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 14 22:45:16.744842 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 14 22:45:16.744848 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 14 22:45:16.744854 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:45:16.744859 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 14 22:45:16.744865 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Jul 14 22:45:16.744871 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Jul 14 22:45:16.744876 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 14 22:45:16.744882 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 14 22:45:16.744888 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 14 22:45:16.744893 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 14 22:45:16.744899 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 14 22:45:16.744912 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 14 22:45:16.744918 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 14 22:45:16.744924 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 14 22:45:16.744930 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 14 22:45:16.744936 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 14 22:45:16.744942 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 14 22:45:16.747356 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 14 22:45:16.747366 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 14 22:45:16.747372 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 14 22:45:16.747378 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 14 22:45:16.747384 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 14 22:45:16.747390 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:45:16.747399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:45:16.747405 kernel: random: crng init done
Jul 14 22:45:16.747411 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 14 22:45:16.747417 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 14 22:45:16.747423 kernel: printk: log_buf_len min size: 262144 bytes
Jul 14 22:45:16.747429 kernel: printk: log_buf_len: 1048576 bytes
Jul 14 22:45:16.747435 kernel: printk: early log buf free: 239648(91%)
Jul 14 22:45:16.747441 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:45:16.747447 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 14 22:45:16.747457 kernel: Fallback order for Node 0: 0
Jul 14 22:45:16.747463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 14 22:45:16.747469 kernel: Policy zone: DMA32
Jul 14 22:45:16.747475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:45:16.747481 kernel: Memory: 1936340K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 160028K reserved, 0K cma-reserved)
Jul 14 22:45:16.747488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 14 22:45:16.747496 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:45:16.747502 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:45:16.747508 kernel: Dynamic Preempt: voluntary
Jul 14 22:45:16.747514 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:45:16.747520 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:45:16.747526 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 14 22:45:16.747532 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:45:16.747537 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:45:16.747543 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:45:16.747550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:45:16.747557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 14 22:45:16.747563 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 14 22:45:16.747569 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 14 22:45:16.747575 kernel: Console: colour VGA+ 80x25
Jul 14 22:45:16.747581 kernel: printk: console [tty0] enabled
Jul 14 22:45:16.747587 kernel: printk: console [ttyS0] enabled
Jul 14 22:45:16.747593 kernel: ACPI: Core revision 20230628
Jul 14 22:45:16.747599 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 14 22:45:16.747605 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:45:16.747612 kernel: x2apic enabled
Jul 14 22:45:16.747618 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:45:16.747624 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:45:16.747630 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 14 22:45:16.747636 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 14 22:45:16.747642 kernel: Disabled fast string operations
Jul 14 22:45:16.747648 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 14 22:45:16.747653 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 14 22:45:16.747659 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:45:16.747667 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 14 22:45:16.747672 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 14 22:45:16.747678 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 14 22:45:16.747684 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 14 22:45:16.747690 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 14 22:45:16.747697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:45:16.747702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:45:16.747708 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 14 22:45:16.747715 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 14 22:45:16.747721 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 14 22:45:16.747727 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 14 22:45:16.747733 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:45:16.747739 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:45:16.747745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:45:16.747751 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:45:16.747757 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:45:16.747763 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:45:16.747770 kernel: pid_max: default: 131072 minimum: 1024
Jul 14 22:45:16.747776 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:45:16.747782 kernel: landlock: Up and running.
Jul 14 22:45:16.747788 kernel: SELinux: Initializing.
Jul 14 22:45:16.747794 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 22:45:16.747800 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 14 22:45:16.747806 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 14 22:45:16.747811 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747817 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747824 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 14 22:45:16.747830 kernel: Performance Events: Skylake events, core PMU driver.
Jul 14 22:45:16.747836 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 14 22:45:16.747842 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 14 22:45:16.747848 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 14 22:45:16.747853 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 14 22:45:16.747859 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 14 22:45:16.747865 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 14 22:45:16.747871 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 14 22:45:16.747877 kernel: ... version: 1
Jul 14 22:45:16.747883 kernel: ... bit width: 48
Jul 14 22:45:16.747889 kernel: ... generic registers: 4
Jul 14 22:45:16.747895 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:45:16.747901 kernel: ... max period: 000000007fffffff
Jul 14 22:45:16.747907 kernel: ... fixed-purpose events: 0
Jul 14 22:45:16.747914 kernel: ... event mask: 000000000000000f
Jul 14 22:45:16.747919 kernel: signal: max sigframe size: 1776
Jul 14 22:45:16.747925 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:45:16.747932 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:45:16.747939 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 14 22:45:16.747944 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:45:16.747950 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:45:16.747956 kernel: .... node #0, CPUs: #1
Jul 14 22:45:16.747962 kernel: Disabled fast string operations
Jul 14 22:45:16.747968 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Jul 14 22:45:16.747974 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 14 22:45:16.747980 kernel: smp: Brought up 1 node, 2 CPUs
Jul 14 22:45:16.747986 kernel: smpboot: Max logical packages: 128
Jul 14 22:45:16.747993 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jul 14 22:45:16.747999 kernel: devtmpfs: initialized
Jul 14 22:45:16.748005 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:45:16.748011 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jul 14 22:45:16.748017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:45:16.748023 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 14 22:45:16.748029 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:45:16.748035 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:45:16.748041 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:45:16.748048 kernel: audit: type=2000 audit(1752533115.093:1): state=initialized audit_enabled=0 res=1
Jul 14 22:45:16.748053 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:45:16.748059 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:45:16.748065 kernel: cpuidle: using governor menu
Jul 14 22:45:16.748071 kernel: Simple Boot Flag at 0x36 set to 0x80
Jul 14 22:45:16.748077 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:45:16.748083 kernel: dca service started, version 1.12.1
Jul 14 22:45:16.748089 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jul 14 22:45:16.748096 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:45:16.748102 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:45:16.748108 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:45:16.748114 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:45:16.748120 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:45:16.748126 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:45:16.748132 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:45:16.748137 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:45:16.748143 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:45:16.748150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:45:16.748156 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 14 22:45:16.748162 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:45:16.748168 kernel: ACPI: Interpreter enabled
Jul 14 22:45:16.748174 kernel: ACPI: PM: (supports S0 S1 S5)
Jul 14 22:45:16.748180 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:45:16.748186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:45:16.748191 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:45:16.748197 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jul 14 22:45:16.748204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jul 14 22:45:16.748288 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:45:16.749171 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jul 14 22:45:16.749230 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jul 14 22:45:16.749239 kernel: PCI host bridge to bus 0000:00
Jul 14 22:45:16.749293 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:45:16.749354 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jul 14 22:45:16.749406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:45:16.749453 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:45:16.749509 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jul 14 22:45:16.749556 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jul 14 22:45:16.749616 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Jul 14 22:45:16.749676 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Jul 14 22:45:16.749735 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Jul 14 22:45:16.749792 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Jul 14 22:45:16.749844 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Jul 14 22:45:16.749896 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 14 22:45:16.749948 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 14 22:45:16.749999 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 14 22:45:16.750053 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 14 22:45:16.750111 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Jul 14 22:45:16.750164 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jul 14 22:45:16.750215 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jul 14 22:45:16.750270 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Jul 14 22:45:16.754340 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Jul 14 22:45:16.754410 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Jul 14 22:45:16.754480 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Jul 14 22:45:16.754536 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Jul 14 22:45:16.754588 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Jul 14 22:45:16.754640 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Jul 14 22:45:16.754691 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jul 14 22:45:16.754743 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:45:16.754800 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jul 14 22:45:16.754859 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.754912 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.754971 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755025 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755081 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755134 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755193 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755245 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755301 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755365 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755421 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755484 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755546 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755599 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755654 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755707 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755763 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755816 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755874 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.755926 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.755984 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.756036 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.756091 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.756157 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.756213 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.756265 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.758631 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.758699 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.758756 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.758809 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.758867 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.758918 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.758972 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.759023 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.759075 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jul 14 22:45:16.759126 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jul 14 22:45:16.759185 kernel:
pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759236 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759288 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759353 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759408 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759459 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759514 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759569 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759623 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759673 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759727 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759777 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759830 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.759884 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.759973 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760054 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.760110 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760161 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.760214 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.760267 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762392 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762464 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762527 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762582 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold 
Jul 14 22:45:16.762673 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762730 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762785 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 14 22:45:16.762838 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.762891 kernel: pci_bus 0000:01: extended config space not accessible Jul 14 22:45:16.762945 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:45:16.762997 kernel: pci_bus 0000:02: extended config space not accessible Jul 14 22:45:16.763006 kernel: acpiphp: Slot [32] registered Jul 14 22:45:16.763015 kernel: acpiphp: Slot [33] registered Jul 14 22:45:16.763021 kernel: acpiphp: Slot [34] registered Jul 14 22:45:16.763027 kernel: acpiphp: Slot [35] registered Jul 14 22:45:16.763033 kernel: acpiphp: Slot [36] registered Jul 14 22:45:16.763038 kernel: acpiphp: Slot [37] registered Jul 14 22:45:16.763044 kernel: acpiphp: Slot [38] registered Jul 14 22:45:16.763050 kernel: acpiphp: Slot [39] registered Jul 14 22:45:16.763056 kernel: acpiphp: Slot [40] registered Jul 14 22:45:16.763062 kernel: acpiphp: Slot [41] registered Jul 14 22:45:16.763069 kernel: acpiphp: Slot [42] registered Jul 14 22:45:16.763074 kernel: acpiphp: Slot [43] registered Jul 14 22:45:16.763080 kernel: acpiphp: Slot [44] registered Jul 14 22:45:16.763086 kernel: acpiphp: Slot [45] registered Jul 14 22:45:16.763091 kernel: acpiphp: Slot [46] registered Jul 14 22:45:16.763097 kernel: acpiphp: Slot [47] registered Jul 14 22:45:16.763103 kernel: acpiphp: Slot [48] registered Jul 14 22:45:16.763108 kernel: acpiphp: Slot [49] registered Jul 14 22:45:16.763114 kernel: acpiphp: Slot [50] registered Jul 14 22:45:16.763121 kernel: acpiphp: Slot [51] registered Jul 14 22:45:16.763127 kernel: acpiphp: Slot [52] registered Jul 14 22:45:16.763133 kernel: acpiphp: Slot [53] registered Jul 14 22:45:16.763138 kernel: acpiphp: Slot [54] registered Jul 14 
22:45:16.763144 kernel: acpiphp: Slot [55] registered Jul 14 22:45:16.763149 kernel: acpiphp: Slot [56] registered Jul 14 22:45:16.763155 kernel: acpiphp: Slot [57] registered Jul 14 22:45:16.763161 kernel: acpiphp: Slot [58] registered Jul 14 22:45:16.763167 kernel: acpiphp: Slot [59] registered Jul 14 22:45:16.763172 kernel: acpiphp: Slot [60] registered Jul 14 22:45:16.763179 kernel: acpiphp: Slot [61] registered Jul 14 22:45:16.763185 kernel: acpiphp: Slot [62] registered Jul 14 22:45:16.763191 kernel: acpiphp: Slot [63] registered Jul 14 22:45:16.763242 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 14 22:45:16.763293 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:45:16.763354 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.763405 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.763460 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 22:45:16.763515 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 22:45:16.763566 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 22:45:16.763617 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 22:45:16.763668 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 22:45:16.763725 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 14 22:45:16.763778 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 14 22:45:16.763830 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 22:45:16.763886 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 22:45:16.763938 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 22:45:16.763990 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe 
device. You can enable it with 'pcie_aspm=force' Jul 14 22:45:16.764042 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:45:16.764094 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:45:16.764145 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.764198 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:45:16.764249 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:45:16.764303 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.765874 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:45:16.765936 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:45:16.765991 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:45:16.766044 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.766097 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.766150 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:45:16.766206 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.766258 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.766310 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:45:16.766370 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.766422 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.766477 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:45:16.766530 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.766583 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.766635 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:45:16.766687 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.766739 kernel: pci 0000:00:15.6: 
bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.766791 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:45:16.766843 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.766898 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.766957 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 14 22:45:16.767012 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 14 22:45:16.767066 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 14 22:45:16.767119 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 14 22:45:16.767172 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 14 22:45:16.767225 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 22:45:16.767281 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 22:45:16.769355 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 22:45:16.769418 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 22:45:16.769473 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:45:16.769527 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:45:16.769579 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 22:45:16.769631 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:45:16.769684 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:45:16.769740 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.769792 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.769845 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:45:16.769897 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:45:16.769949 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.770000 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.770052 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:45:16.770107 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.770158 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.770212 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:45:16.770264 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.770315 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.770378 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:45:16.770430 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.770489 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.770546 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:45:16.770598 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.770649 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.770701 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:45:16.770753 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.770805 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.770857 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:45:16.770908 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:45:16.770963 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.771016 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.771069 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:45:16.771121 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:45:16.771172 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.771223 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.771275 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:45:16.774354 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:45:16.774422 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.774483 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.774537 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:45:16.774590 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.774642 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.774694 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:45:16.774745 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.774796 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.774852 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:45:16.774905 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.774956 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:45:16.775008 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:45:16.775059 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.775111 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.775163 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:45:16.775214 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.775270 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.775332 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:45:16.775391 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:45:16.775442 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.775495 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.775548 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:45:16.775600 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:45:16.775655 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.775706 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.775759 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:45:16.775811 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.775863 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.775915 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:45:16.775968 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.776019 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.776074 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 
22:45:16.776125 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.776177 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:45:16.776229 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:45:16.776281 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.780466 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.780528 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:45:16.780583 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.780639 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.780694 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:45:16.780745 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.780797 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.780806 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 22:45:16.780812 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 22:45:16.780818 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 22:45:16.780824 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 22:45:16.780830 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 22:45:16.780838 kernel: iommu: Default domain type: Translated Jul 14 22:45:16.780845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 22:45:16.780850 kernel: PCI: Using ACPI for IRQ routing Jul 14 22:45:16.780856 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 22:45:16.780862 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 22:45:16.780868 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 22:45:16.780919 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 22:45:16.780971 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Jul 14 22:45:16.781022 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 22:45:16.781034 kernel: vgaarb: loaded Jul 14 22:45:16.781041 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 22:45:16.781046 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 22:45:16.781052 kernel: clocksource: Switched to clocksource tsc-early Jul 14 22:45:16.781058 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:45:16.781065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:45:16.781070 kernel: pnp: PnP ACPI init Jul 14 22:45:16.781125 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 22:45:16.781176 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 14 22:45:16.781224 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 22:45:16.781275 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 22:45:16.781332 kernel: pnp 00:06: [dma 2] Jul 14 22:45:16.781384 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 22:45:16.781431 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 22:45:16.781481 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 22:45:16.781490 kernel: pnp: PnP ACPI: found 8 devices Jul 14 22:45:16.781496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 22:45:16.781502 kernel: NET: Registered PF_INET protocol family Jul 14 22:45:16.781508 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:45:16.781514 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 14 22:45:16.781520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:45:16.781526 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 22:45:16.781532 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 14 22:45:16.781540 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 22:45:16.781546 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:45:16.781552 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 22:45:16.781558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:45:16.781564 kernel: NET: Registered PF_XDP protocol family Jul 14 22:45:16.781617 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 22:45:16.781671 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 22:45:16.781724 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 22:45:16.781779 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 22:45:16.781832 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 22:45:16.781885 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 22:45:16.781938 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 22:45:16.781991 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 22:45:16.782043 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 22:45:16.782098 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 22:45:16.782151 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 14 22:45:16.782204 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 22:45:16.782256 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 
22:45:16.782308 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 22:45:16.782378 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 22:45:16.782431 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 22:45:16.782491 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 22:45:16.782543 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 22:45:16.782595 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 22:45:16.782647 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 22:45:16.782703 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 22:45:16.782755 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 22:45:16.782808 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 22:45:16.782860 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.782911 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.782963 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783015 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783070 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783122 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783175 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783226 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783278 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783340 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 
14 22:45:16.783393 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783445 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783500 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783551 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783604 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783655 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783708 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783760 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783812 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783865 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.783921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.783974 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784026 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784078 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784130 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784182 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.784234 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.784286 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789217 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789278 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789345 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789402 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789455 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789507 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789559 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789611 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789667 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789719 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789770 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789822 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789874 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.789926 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.789978 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790029 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790084 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790135 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790187 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790239 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790290 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790350 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790402 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790458 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790510 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790561 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790616 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Jul 14 22:45:16.790667 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790718 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790769 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790821 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790872 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.790923 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.790975 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791027 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791082 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791134 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791187 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791239 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791291 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791365 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791419 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791475 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791528 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791581 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791637 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791689 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791742 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791794 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791845 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.791898 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.791950 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792002 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792054 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792109 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792161 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792214 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 22:45:16.792266 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 22:45:16.792326 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 22:45:16.792382 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 22:45:16.792434 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 22:45:16.792485 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.792537 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.792597 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 14 22:45:16.792650 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 22:45:16.792704 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 22:45:16.792756 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.792808 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.792862 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 22:45:16.792914 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 22:45:16.792967 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.793023 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 
22:45:16.793077 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 22:45:16.793130 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 22:45:16.793183 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.793235 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.793287 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 22:45:16.795457 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.795516 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.795571 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 22:45:16.795623 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.795678 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.795732 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 22:45:16.795784 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.795837 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.795888 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 22:45:16.795940 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.795994 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.796045 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 22:45:16.796096 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.796147 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.796202 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 14 22:45:16.796254 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 22:45:16.796305 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 22:45:16.796371 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Jul 14 22:45:16.796424 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.796481 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 22:45:16.796534 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 22:45:16.796586 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.796637 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.796690 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 22:45:16.796742 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 22:45:16.796794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.796845 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.796897 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 22:45:16.796951 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.797003 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.797055 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 22:45:16.797107 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.797158 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.797210 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 22:45:16.797262 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.797313 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.799458 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 22:45:16.799517 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.799575 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.799629 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 22:45:16.799681 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.799733 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.799787 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 22:45:16.799839 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 22:45:16.799891 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.799943 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.799997 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 22:45:16.800052 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 22:45:16.800105 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.800157 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.800211 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 22:45:16.800263 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 22:45:16.800316 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.800377 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.800430 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 22:45:16.800486 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.800538 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.800594 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 22:45:16.800646 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.800698 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.800751 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 22:45:16.800803 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.800855 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 
22:45:16.800907 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 22:45:16.800959 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.801010 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.801065 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 22:45:16.801118 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.801170 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.801223 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 22:45:16.801275 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 22:45:16.801548 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.801608 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.801663 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 22:45:16.801717 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 22:45:16.801769 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.801824 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.801877 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 22:45:16.801929 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.801981 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.802034 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 22:45:16.802087 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.802139 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.802192 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 22:45:16.802244 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.802298 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Jul 14 22:45:16.802359 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 22:45:16.802412 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.802463 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.802515 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 22:45:16.802569 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.802621 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.802674 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 22:45:16.802726 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.802779 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.802833 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:45:16.802881 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:45:16.802928 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:45:16.802974 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:45:16.803020 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:45:16.803072 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 22:45:16.803120 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 22:45:16.803171 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 22:45:16.803218 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 22:45:16.803266 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 22:45:16.803314 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 22:45:16.803800 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 22:45:16.803851 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 22:45:16.803905 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 22:45:16.803953 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 22:45:16.804004 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 22:45:16.804058 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 22:45:16.804106 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 22:45:16.804153 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 22:45:16.804205 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 22:45:16.804253 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 14 22:45:16.804304 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 22:45:16.804374 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 22:45:16.804423 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 22:45:16.804482 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 22:45:16.804531 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 22:45:16.804583 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 22:45:16.804631 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 22:45:16.804688 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 22:45:16.804737 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 22:45:16.804789 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 22:45:16.804837 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 22:45:16.804902 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 22:45:16.804954 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 22:45:16.805002 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 22:45:16.805056 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 22:45:16.805105 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 22:45:16.805156 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 22:45:16.805209 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 22:45:16.805259 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 22:45:16.805313 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 22:45:16.805428 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 22:45:16.805477 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 22:45:16.805529 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 22:45:16.805578 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 22:45:16.805886 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 22:45:16.805944 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 22:45:16.805998 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 22:45:16.806047 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 22:45:16.806099 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 22:45:16.806147 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 22:45:16.806199 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 22:45:16.806257 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 22:45:16.806564 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 22:45:16.806631 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 22:45:16.806680 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 22:45:16.806728 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 22:45:16.806780 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Jul 14 22:45:16.806828 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 22:45:16.806879 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 22:45:16.806932 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 22:45:16.806981 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 22:45:16.807032 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 22:45:16.807081 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 22:45:16.807132 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 22:45:16.807181 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 22:45:16.807236 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 22:45:16.807284 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 22:45:16.807618 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 22:45:16.807672 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 22:45:16.807725 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 22:45:16.807774 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 22:45:16.807825 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 22:45:16.807876 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 22:45:16.807924 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 22:45:16.807973 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 22:45:16.808027 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 22:45:16.808075 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 22:45:16.808130 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 22:45:16.808178 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Jul 14 22:45:16.808229 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 22:45:16.808279 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 22:45:16.808363 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 22:45:16.808413 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 22:45:16.808468 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 22:45:16.808863 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 22:45:16.808921 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 22:45:16.808972 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 22:45:16.809032 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 22:45:16.809042 kernel: PCI: CLS 32 bytes, default 64 Jul 14 22:45:16.809049 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 22:45:16.809058 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 22:45:16.809065 kernel: clocksource: Switched to clocksource tsc Jul 14 22:45:16.809071 kernel: Initialise system trusted keyrings Jul 14 22:45:16.809077 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 22:45:16.809084 kernel: Key type asymmetric registered Jul 14 22:45:16.809090 kernel: Asymmetric key parser 'x509' registered Jul 14 22:45:16.809096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 14 22:45:16.809104 kernel: io scheduler mq-deadline registered Jul 14 22:45:16.809110 kernel: io scheduler kyber registered Jul 14 22:45:16.809118 kernel: io scheduler bfq registered Jul 14 22:45:16.809180 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 22:45:16.809236 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809303 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 22:45:16.809395 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809460 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 22:45:16.809515 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809569 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 22:45:16.809631 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809685 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 22:45:16.809738 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809792 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 22:45:16.809844 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.809900 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 22:45:16.809954 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810007 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 22:45:16.810061 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810115 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 22:45:16.810171 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810226 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 22:45:16.810279 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810666 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 22:45:16.810729 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810786 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 22:45:16.810841 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.810899 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 22:45:16.810953 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811008 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 22:45:16.811061 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811115 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 22:45:16.811172 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811226 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 22:45:16.811279 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811375 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 22:45:16.811432 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811486 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 22:45:16.811538 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811595 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 22:45:16.811647 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811701 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 22:45:16.811754 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811807 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 22:45:16.811860 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.811916 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 22:45:16.811970 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812024 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 22:45:16.812370 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812436 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 22:45:16.812503 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812559 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 22:45:16.812612 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812665 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 22:45:16.812718 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812771 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 22:45:16.812827 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812879 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 22:45:16.812932 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.812986 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 22:45:16.813038 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813091 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 22:45:16.813146 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813200 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 22:45:16.813253 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813305 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 22:45:16.813397 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 22:45:16.813410 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Jul 14 22:45:16.813417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:45:16.813423 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 22:45:16.813430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 22:45:16.813436 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 22:45:16.813442 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 22:45:16.813497 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 22:45:16.813547 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T22:45:16 UTC (1752533116) Jul 14 22:45:16.813558 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 22:45:16.813605 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 22:45:16.813614 kernel: intel_pstate: CPU model not supported Jul 14 22:45:16.813621 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:45:16.813627 kernel: Segment Routing with IPv6 Jul 14 22:45:16.813634 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:45:16.813640 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:45:16.813647 kernel: Key type dns_resolver registered Jul 14 22:45:16.813653 kernel: IPI shorthand broadcast: enabled Jul 14 22:45:16.813661 kernel: sched_clock: Marking stable (937397473, 233185090)->(1235031317, -64448754) Jul 14 22:45:16.813667 kernel: registered taskstats version 1 Jul 14 22:45:16.813673 kernel: Loading compiled-in X.509 certificates Jul 14 22:45:16.813680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870' Jul 14 22:45:16.813686 kernel: Key type .fscrypt registered Jul 14 22:45:16.813692 kernel: Key type fscrypt-provisioning registered Jul 14 22:45:16.813698 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 14 22:45:16.813705 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:45:16.813712 kernel: ima: No architecture policies found Jul 14 22:45:16.813719 kernel: clk: Disabling unused clocks Jul 14 22:45:16.813725 kernel: Freeing unused kernel image (initmem) memory: 42876K Jul 14 22:45:16.813732 kernel: Write protecting the kernel read-only data: 36864k Jul 14 22:45:16.813738 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 14 22:45:16.813744 kernel: Run /init as init process Jul 14 22:45:16.813751 kernel: with arguments: Jul 14 22:45:16.813757 kernel: /init Jul 14 22:45:16.813763 kernel: with environment: Jul 14 22:45:16.813769 kernel: HOME=/ Jul 14 22:45:16.813777 kernel: TERM=linux Jul 14 22:45:16.813783 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:45:16.813790 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:45:16.813798 systemd[1]: Detected virtualization vmware. Jul 14 22:45:16.813805 systemd[1]: Detected architecture x86-64. Jul 14 22:45:16.813811 systemd[1]: Running in initrd. Jul 14 22:45:16.813818 systemd[1]: No hostname configured, using default hostname. Jul 14 22:45:16.813825 systemd[1]: Hostname set to . Jul 14 22:45:16.813832 systemd[1]: Initializing machine ID from random generator. Jul 14 22:45:16.813839 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:45:16.813845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:45:16.813852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 14 22:45:16.813859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 14 22:45:16.813865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:45:16.813872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 14 22:45:16.813879 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 14 22:45:16.813887 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 14 22:45:16.813894 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 14 22:45:16.813900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:45:16.813907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:45:16.813913 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:45:16.813921 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:45:16.813928 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:45:16.813936 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:45:16.813943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:45:16.813949 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:45:16.813955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:45:16.813962 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 22:45:16.813968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:45:16.813975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:45:16.813982 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 22:45:16.813989 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:45:16.813996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 14 22:45:16.814002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:45:16.814009 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 14 22:45:16.814015 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:45:16.814022 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:45:16.814028 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:45:16.814035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:16.814054 systemd-journald[215]: Collecting audit messages is disabled. Jul 14 22:45:16.814071 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 14 22:45:16.814078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:45:16.814084 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:45:16.814093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:45:16.814101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:45:16.814107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:45:16.814114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:45:16.814122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:16.814128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:16.814135 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 22:45:16.814350 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:45:16.814359 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 14 22:45:16.814366 kernel: Bridge firewalling registered Jul 14 22:45:16.814372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:45:16.814379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:45:16.814386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:45:16.814396 systemd-journald[215]: Journal started Jul 14 22:45:16.814411 systemd-journald[215]: Runtime Journal (/run/log/journal/847721fa8d474d9e8d304217707f7cb7) is 4.8M, max 38.6M, 33.8M free. Jul 14 22:45:16.764394 systemd-modules-load[216]: Inserted module 'overlay' Jul 14 22:45:16.799187 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 14 22:45:16.816709 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:45:16.817002 dracut-cmdline[232]: dracut-dracut-053 Jul 14 22:45:16.817002 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497 Jul 14 22:45:16.822405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:45:16.827686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:45:16.829400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 14 22:45:16.846610 systemd-resolved[280]: Positive Trust Anchors: Jul 14 22:45:16.846618 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:45:16.846639 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:45:16.849139 systemd-resolved[280]: Defaulting to hostname 'linux'. Jul 14 22:45:16.849724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:45:16.849863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:45:16.859332 kernel: SCSI subsystem initialized Jul 14 22:45:16.867331 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:45:16.873330 kernel: iscsi: registered transport (tcp) Jul 14 22:45:16.888332 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:45:16.888348 kernel: QLogic iSCSI HBA Driver Jul 14 22:45:16.908312 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 14 22:45:16.911417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 14 22:45:16.926769 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 14 22:45:16.926803 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:45:16.927930 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 14 22:45:16.960366 kernel: raid6: avx2x4 gen() 52067 MB/s Jul 14 22:45:16.976340 kernel: raid6: avx2x2 gen() 52688 MB/s Jul 14 22:45:16.993571 kernel: raid6: avx2x1 gen() 43126 MB/s Jul 14 22:45:16.993634 kernel: raid6: using algorithm avx2x2 gen() 52688 MB/s Jul 14 22:45:17.011582 kernel: raid6: .... xor() 30153 MB/s, rmw enabled Jul 14 22:45:17.011652 kernel: raid6: using avx2x2 recovery algorithm Jul 14 22:45:17.025342 kernel: xor: automatically using best checksumming function avx Jul 14 22:45:17.127672 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 14 22:45:17.132140 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:45:17.136413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:45:17.143773 systemd-udevd[431]: Using default interface naming scheme 'v255'. Jul 14 22:45:17.146246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:45:17.153410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 14 22:45:17.160056 dracut-pre-trigger[437]: rd.md=0: removing MD RAID activation Jul 14 22:45:17.175156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:45:17.178415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:45:17.250298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:45:17.256377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 14 22:45:17.264527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 14 22:45:17.265077 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 14 22:45:17.265451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:45:17.265571 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:45:17.274475 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 14 22:45:17.281848 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:45:17.312398 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 22:45:17.312437 kernel: vmw_pvscsi: using 64bit dma Jul 14 22:45:17.323232 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 14 22:45:17.323258 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 22:45:17.323379 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 22:45:17.325742 kernel: vmw_pvscsi: max_id: 16 Jul 14 22:45:17.325758 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 22:45:17.331329 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 22:45:17.331347 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 22:45:17.331359 kernel: vmw_pvscsi: using MSI-X Jul 14 22:45:17.334336 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 22:45:17.334363 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:45:17.340348 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 22:45:17.343335 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 22:45:17.345340 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 22:45:17.349014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:45:17.349049 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:45:17.351386 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 14 22:45:17.351399 kernel: AES CTR mode by8 optimization enabled Jul 14 22:45:17.349221 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:17.349314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:45:17.349344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:17.349446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:17.357457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:17.363559 kernel: libata version 3.00 loaded. Jul 14 22:45:17.368334 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 22:45:17.371350 kernel: scsi host1: ata_piix Jul 14 22:45:17.372483 kernel: scsi host2: ata_piix Jul 14 22:45:17.372565 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 14 22:45:17.372575 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 14 22:45:17.373329 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 22:45:17.373436 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 22:45:17.373505 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 22:45:17.373568 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 22:45:17.373629 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 22:45:17.383219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:17.386405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 14 22:45:17.397875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 14 22:45:17.416339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:17.416368 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 22:45:17.545341 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 22:45:17.549340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 22:45:17.572652 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 22:45:17.572797 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 22:45:17.582329 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 22:45:17.582427 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (487) Jul 14 22:45:17.583578 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 14 22:45:17.586236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 14 22:45:17.588334 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (481) Jul 14 22:45:17.591494 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 22:45:17.595917 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 14 22:45:17.596028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 14 22:45:17.600469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 14 22:45:17.631352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:17.638348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:18.638348 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 22:45:18.638470 disk-uuid[589]: The operation has completed successfully. Jul 14 22:45:18.670365 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 14 22:45:18.670431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 14 22:45:18.674411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 14 22:45:18.676259 sh[606]: Success Jul 14 22:45:18.685362 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 14 22:45:18.719487 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 14 22:45:18.720374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 14 22:45:18.720684 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 14 22:45:18.736160 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127 Jul 14 22:45:18.736182 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:18.736190 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 14 22:45:18.737245 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 14 22:45:18.738032 kernel: BTRFS info (device dm-0): using free space tree Jul 14 22:45:18.745329 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 14 22:45:18.748279 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 14 22:45:18.757386 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 14 22:45:18.758512 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 14 22:45:18.777128 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:18.777152 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:18.777160 kernel: BTRFS info (device sda6): using free space tree Jul 14 22:45:18.783329 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 22:45:18.791889 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 14 22:45:18.793341 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:18.799167 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 14 22:45:18.802409 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 14 22:45:18.824982 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 22:45:18.830568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 14 22:45:18.882012 ignition[666]: Ignition 2.19.0 Jul 14 22:45:18.882019 ignition[666]: Stage: fetch-offline Jul 14 22:45:18.882042 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.882048 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.882099 ignition[666]: parsed url from cmdline: "" Jul 14 22:45:18.882101 ignition[666]: no config URL provided Jul 14 22:45:18.882103 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:45:18.882108 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:45:18.882478 ignition[666]: config successfully fetched Jul 14 22:45:18.882496 ignition[666]: parsing config with SHA512: ad7da1504b708e0e7711baec01e44d45edc790e831c71dabaedeb448754a945eb4f3fd087cb5c07fc32466e7de36c1cbc50384d417d579a8c67227c727c8142e Jul 14 22:45:18.887952 unknown[666]: fetched base config from "system" Jul 14 22:45:18.888062 unknown[666]: fetched user config from "vmware" Jul 14 22:45:18.888357 ignition[666]: fetch-offline: fetch-offline passed Jul 14 22:45:18.888393 ignition[666]: Ignition finished successfully Jul 14 22:45:18.889583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:45:18.889831 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 14 22:45:18.893418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:45:18.904999 systemd-networkd[800]: lo: Link UP Jul 14 22:45:18.905005 systemd-networkd[800]: lo: Gained carrier Jul 14 22:45:18.905702 systemd-networkd[800]: Enumeration completed Jul 14 22:45:18.905861 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:45:18.905968 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 22:45:18.906000 systemd[1]: Reached target network.target - Network. Jul 14 22:45:18.906091 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:45:18.909225 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 22:45:18.909792 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 22:45:18.909475 systemd-networkd[800]: ens192: Link UP Jul 14 22:45:18.909478 systemd-networkd[800]: ens192: Gained carrier Jul 14 22:45:18.918405 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 14 22:45:18.925837 ignition[802]: Ignition 2.19.0 Jul 14 22:45:18.925843 ignition[802]: Stage: kargs Jul 14 22:45:18.925946 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.925952 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.926569 ignition[802]: kargs: kargs passed Jul 14 22:45:18.926597 ignition[802]: Ignition finished successfully Jul 14 22:45:18.927786 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 14 22:45:18.931422 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 14 22:45:18.939073 ignition[809]: Ignition 2.19.0 Jul 14 22:45:18.939081 ignition[809]: Stage: disks Jul 14 22:45:18.939203 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:18.939209 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:18.939790 ignition[809]: disks: disks passed Jul 14 22:45:18.939814 ignition[809]: Ignition finished successfully Jul 14 22:45:18.940462 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 14 22:45:18.940839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 14 22:45:18.940976 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:45:18.941167 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:45:18.941364 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:45:18.941535 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:45:18.946410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 14 22:45:18.956575 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 14 22:45:18.957460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 14 22:45:18.962407 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 14 22:45:19.023395 kernel: EXT4-fs (sda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none. Jul 14 22:45:19.023733 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 14 22:45:19.024089 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 14 22:45:19.033375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:45:19.034706 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jul 14 22:45:19.034979 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 14 22:45:19.035005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:45:19.035020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:45:19.038016 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 14 22:45:19.038957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:45:19.041505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (825) Jul 14 22:45:19.044660 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:19.044682 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:19.044691 kernel: BTRFS info (device sda6): using free space tree Jul 14 22:45:19.049332 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 22:45:19.050931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 14 22:45:19.067688 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:45:19.070650 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:45:19.072949 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:45:19.074819 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:45:19.134778 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:45:19.137431 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:45:19.139736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 14 22:45:19.145424 kernel: BTRFS info (device sda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:19.156419 ignition[937]: INFO : Ignition 2.19.0 Jul 14 22:45:19.156664 ignition[937]: INFO : Stage: mount Jul 14 22:45:19.156916 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:19.157050 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:19.158268 ignition[937]: INFO : mount: mount passed Jul 14 22:45:19.158471 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 22:45:19.158679 ignition[937]: INFO : Ignition finished successfully Jul 14 22:45:19.159364 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:45:19.163414 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:45:19.734828 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:45:19.740504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:45:19.748358 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (950) Jul 14 22:45:19.751012 kernel: BTRFS info (device sda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:45:19.751036 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:45:19.751045 kernel: BTRFS info (device sda6): using free space tree Jul 14 22:45:19.755354 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 22:45:19.762206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 22:45:19.778990 ignition[967]: INFO : Ignition 2.19.0 Jul 14 22:45:19.778990 ignition[967]: INFO : Stage: files Jul 14 22:45:19.778990 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:19.778990 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:19.779762 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:45:19.785914 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:45:19.785914 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:45:19.789684 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:45:19.789847 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:45:19.790018 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:45:19.789924 unknown[967]: wrote ssh authorized keys file for user: core Jul 14 22:45:19.791866 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 14 22:45:19.792064 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 14 22:45:19.852730 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 14 22:45:20.041285 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:45:20.041285 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:45:20.041811 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:45:20.043115 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 14 22:45:20.754439 systemd-networkd[800]: ens192: Gained IPv6LL Jul 14 22:45:20.843539 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 14 22:45:21.091433 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 14 22:45:21.091433 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 22:45:21.092003 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 14 22:45:21.092003 ignition[967]: INFO : files: op(d): [started] processing unit "containerd.service" Jul 14 22:45:21.092732 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(d): [finished] processing unit "containerd.service" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jul 14 22:45:21.093016 ignition[967]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jul 14 22:45:21.094648 ignition[967]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:45:21.229304 ignition[967]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:45:21.233380 ignition[967]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:45:21.234173 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:45:21.234173 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:45:21.234173 ignition[967]: INFO : files: files passed
Jul 14 22:45:21.234173 ignition[967]: INFO : Ignition finished successfully Jul 14 22:45:21.234418 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:45:21.239412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:45:21.240862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 22:45:21.242156 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:45:21.242373 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:45:21.247613 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:45:21.247613 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:45:21.248642 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:45:21.249564 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:45:21.249957 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:45:21.254502 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:45:21.267618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:45:21.267688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:45:21.267974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:45:21.268092 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:45:21.268288 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:45:21.271430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:45:21.278200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:45:21.282438 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:45:21.287781 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:45:21.288056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:45:21.288213 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 22:45:21.288366 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:45:21.288446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:45:21.288663 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:45:21.288881 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:45:21.289204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:45:21.289413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:45:21.289623 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:45:21.289825 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:45:21.290004 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:45:21.290218 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:45:21.290428 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:45:21.290620 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:45:21.290781 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:45:21.290842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:45:21.291124 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:45:21.291254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:45:21.291469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 14 22:45:21.291514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:45:21.291654 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:45:21.291713 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:45:21.291952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:45:21.292016 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:45:21.292260 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:45:21.292412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:45:21.298340 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:45:21.298508 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:45:21.298702 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:45:21.298881 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:45:21.298950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:45:21.299168 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:45:21.299214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:45:21.299452 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:45:21.299514 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:45:21.299769 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:45:21.299825 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:45:21.304416 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:45:21.306300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:45:21.306496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 14 22:45:21.306583 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:45:21.306823 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:45:21.306898 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:45:21.309957 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:45:21.311061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:45:21.315136 ignition[1022]: INFO : Ignition 2.19.0 Jul 14 22:45:21.315136 ignition[1022]: INFO : Stage: umount Jul 14 22:45:21.315469 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:45:21.315469 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 22:45:21.317289 ignition[1022]: INFO : umount: umount passed Jul 14 22:45:21.317448 ignition[1022]: INFO : Ignition finished successfully Jul 14 22:45:21.318305 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:45:21.319402 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:45:21.319604 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:45:21.319904 systemd[1]: Stopped target network.target - Network. Jul 14 22:45:21.320118 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:45:21.320145 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 22:45:21.320420 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:45:21.320444 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:45:21.320913 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:45:21.320938 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:45:21.321047 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:45:21.321069 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 14 22:45:21.321253 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:45:21.321640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:45:21.324097 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:45:21.324158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:45:21.324474 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:45:21.324499 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:45:21.329433 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 22:45:21.329586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:45:21.329613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:45:21.330410 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 14 22:45:21.330439 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 14 22:45:21.330688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:45:21.334749 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:45:21.334815 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:45:21.336152 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:45:21.336196 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:45:21.336463 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:45:21.336485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:45:21.336943 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:45:21.336966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 14 22:45:21.337578 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:45:21.337645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:45:21.338264 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:45:21.338299 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 22:45:21.338516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:45:21.338534 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:45:21.338684 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:45:21.338707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:45:21.338985 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:45:21.339006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:45:21.339556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:45:21.339579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:45:21.345547 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:45:21.345770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:45:21.345798 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:45:21.345917 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 14 22:45:21.345938 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:45:21.346053 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:45:21.346074 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:45:21.346187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 14 22:45:21.346206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:21.346475 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:45:21.346532 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:45:21.348569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:45:21.348615 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:45:21.427094 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:45:21.427186 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:45:21.427689 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:45:21.427860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:45:21.427898 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:45:21.431447 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:45:21.457339 systemd[1]: Switching root. Jul 14 22:45:21.486132 systemd-journald[215]: Journal stopped Jul 14 22:45:22.930823 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). Jul 14 22:45:22.930846 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:45:22.930854 kernel: SELinux: policy capability open_perms=1 Jul 14 22:45:22.930860 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:45:22.930865 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:45:22.930870 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:45:22.930877 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:45:22.930884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:45:22.930890 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:45:22.930896 systemd[1]: Successfully loaded SELinux policy in 34.305ms. 
Jul 14 22:45:22.930903 kernel: audit: type=1403 audit(1752533122.197:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:45:22.930909 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.695ms. Jul 14 22:45:22.930916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:45:22.930923 systemd[1]: Detected virtualization vmware. Jul 14 22:45:22.930931 systemd[1]: Detected architecture x86-64. Jul 14 22:45:22.930937 systemd[1]: Detected first boot. Jul 14 22:45:22.930944 systemd[1]: Initializing machine ID from random generator. Jul 14 22:45:22.930951 zram_generator::config[1082]: No configuration found. Jul 14 22:45:22.930959 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:45:22.930966 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:45:22.930973 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 14 22:45:22.930980 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:45:22.930986 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 14 22:45:22.930993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:45:22.931001 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:45:22.931008 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 22:45:22.931015 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 14 22:45:22.931021 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:45:22.931028 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:45:22.931035 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:45:22.931041 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:45:22.931049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:45:22.931056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:45:22.931063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:45:22.931069 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:45:22.931076 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 22:45:22.931083 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:45:22.931089 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:45:22.931095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:45:22.931104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:45:22.931111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:45:22.931119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:45:22.931126 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:45:22.931133 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:45:22.931140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 14 22:45:22.931147 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:45:22.931153 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 22:45:22.931161 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 22:45:22.931168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:45:22.931175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 22:45:22.931182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:45:22.931189 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:45:22.931197 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 22:45:22.931205 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:45:22.931212 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:45:22.931219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:22.931226 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:45:22.931233 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 22:45:22.931240 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 22:45:22.931247 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:45:22.931255 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 14 22:45:22.931262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:45:22.931269 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:45:22.931276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 14 22:45:22.931283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:45:22.931289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:45:22.931296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:45:22.931303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:45:22.931310 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:45:22.931325 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 14 22:45:22.931334 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 14 22:45:22.931341 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:45:22.931348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:45:22.931355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:45:22.931362 kernel: fuse: init (API version 7.39) Jul 14 22:45:22.931369 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:45:22.931376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:45:22.931385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:22.931393 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:45:22.931400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 22:45:22.931406 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:45:22.931413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 14 22:45:22.931420 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 22:45:22.931427 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:45:22.931434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:45:22.931442 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:45:22.931450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:45:22.931456 kernel: loop: module loaded Jul 14 22:45:22.931473 systemd-journald[1180]: Collecting audit messages is disabled. Jul 14 22:45:22.931490 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:45:22.931498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:45:22.931505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:45:22.931511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:45:22.931519 systemd-journald[1180]: Journal started Jul 14 22:45:22.931533 systemd-journald[1180]: Runtime Journal (/run/log/journal/c00a1c5c1e96464395c5b6e9542171c8) is 4.8M, max 38.6M, 33.8M free. Jul 14 22:45:22.931849 jq[1159]: true Jul 14 22:45:22.936354 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:45:22.933874 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:45:22.933955 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:45:22.934174 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:45:22.934248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:45:22.935521 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:45:22.936501 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 14 22:45:22.936854 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:45:22.938061 jq[1196]: true Jul 14 22:45:22.959956 kernel: ACPI: bus type drm_connector registered Jul 14 22:45:22.957470 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:45:22.957591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:45:22.963921 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:45:22.967421 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:45:22.972408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 22:45:22.972576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:45:22.979150 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:45:22.984502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:45:22.984637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:45:22.988443 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:45:22.988735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:45:22.992913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:45:23.002460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:45:23.006067 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:45:23.006232 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jul 14 22:45:23.008582 systemd-journald[1180]: Time spent on flushing to /var/log/journal/c00a1c5c1e96464395c5b6e9542171c8 is 43.109ms for 1821 entries. Jul 14 22:45:23.008582 systemd-journald[1180]: System Journal (/var/log/journal/c00a1c5c1e96464395c5b6e9542171c8) is 8.0M, max 584.8M, 576.8M free. Jul 14 22:45:23.062337 systemd-journald[1180]: Received client request to flush runtime journal. Jul 14 22:45:23.023644 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:45:23.032686 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:45:23.032863 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:45:23.056911 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:45:23.063497 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:45:23.071109 ignition[1207]: Ignition 2.19.0 Jul 14 22:45:23.071560 ignition[1207]: deleting config from guestinfo properties Jul 14 22:45:23.090057 ignition[1207]: Successfully deleted config Jul 14 22:45:23.094190 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 14 22:45:23.094201 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 14 22:45:23.094604 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 14 22:45:23.098284 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:45:23.107575 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:45:23.124840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:45:23.131421 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 22:45:23.136974 udevadm[1271]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jul 14 22:45:23.181697 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:45:23.189491 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:45:23.198221 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 14 22:45:23.198234 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 14 22:45:23.202664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:45:23.677389 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 22:45:23.682480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:45:23.697329 systemd-udevd[1280]: Using default interface naming scheme 'v255'. Jul 14 22:45:23.748846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:45:23.756511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:45:23.777114 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:45:23.801191 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 14 22:45:23.822039 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:45:23.838337 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 22:45:23.847342 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:45:23.880259 systemd-networkd[1287]: lo: Link UP Jul 14 22:45:23.880266 systemd-networkd[1287]: lo: Gained carrier Jul 14 22:45:23.881272 systemd-networkd[1287]: Enumeration completed Jul 14 22:45:23.881378 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:45:23.881981 systemd-networkd[1287]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jul 14 22:45:23.885338 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 22:45:23.885493 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 22:45:23.886049 systemd-networkd[1287]: ens192: Link UP Jul 14 22:45:23.886143 systemd-networkd[1287]: ens192: Gained carrier Jul 14 22:45:23.888523 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:45:23.897814 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1283) Jul 14 22:45:23.936342 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 14 22:45:23.936148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 14 22:45:23.958351 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 22:45:23.963976 (udev-worker)[1284]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 14 22:45:23.972335 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 14 22:45:23.974483 kernel: Guest personality initialized and is active Jul 14 22:45:23.974524 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 14 22:45:23.974535 kernel: Initialized host personality Jul 14 22:45:23.976940 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:45:23.978550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:45:23.992123 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 22:45:23.996577 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 22:45:24.005094 lvm[1322]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:45:24.024248 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jul 14 22:45:24.024472 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:45:24.030412 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 22:45:24.035621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:45:24.036197 lvm[1327]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:45:24.061293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 22:45:24.061525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 22:45:24.061644 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:45:24.061661 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:45:24.061759 systemd[1]: Reached target machines.target - Containers. Jul 14 22:45:24.062814 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 22:45:24.065454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 22:45:24.067183 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:45:24.067452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:45:24.068860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:45:24.072479 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 22:45:24.074612 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:45:24.075256 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 14 22:45:24.089235 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 22:45:24.101455 kernel: loop0: detected capacity change from 0 to 2976 Jul 14 22:45:24.101450 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:45:24.102503 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 22:45:24.117340 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:45:24.143337 kernel: loop1: detected capacity change from 0 to 140768 Jul 14 22:45:24.210389 kernel: loop2: detected capacity change from 0 to 142488 Jul 14 22:45:24.258335 kernel: loop3: detected capacity change from 0 to 221472 Jul 14 22:45:24.327347 kernel: loop4: detected capacity change from 0 to 2976 Jul 14 22:45:24.372344 kernel: loop5: detected capacity change from 0 to 140768 Jul 14 22:45:24.399353 kernel: loop6: detected capacity change from 0 to 142488 Jul 14 22:45:24.418340 kernel: loop7: detected capacity change from 0 to 221472 Jul 14 22:45:24.478946 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 14 22:45:24.479296 (sd-merge)[1351]: Merged extensions into '/usr'. Jul 14 22:45:24.482221 systemd[1]: Reloading requested from client PID 1337 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:45:24.482232 systemd[1]: Reloading... Jul 14 22:45:24.524334 zram_generator::config[1378]: No configuration found. Jul 14 22:45:24.589103 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:45:24.605628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 22:45:24.642242 systemd[1]: Reloading finished in 159 ms. Jul 14 22:45:24.655962 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:45:24.662245 systemd[1]: Starting ensure-sysext.service... Jul 14 22:45:24.665539 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:45:24.669055 systemd[1]: Reloading requested from client PID 1440 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:45:24.669066 systemd[1]: Reloading... Jul 14 22:45:24.687021 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:45:24.687281 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:45:24.687883 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:45:24.688066 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jul 14 22:45:24.688113 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jul 14 22:45:24.691747 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:45:24.691756 systemd-tmpfiles[1441]: Skipping /boot Jul 14 22:45:24.702737 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:45:24.702744 systemd-tmpfiles[1441]: Skipping /boot Jul 14 22:45:24.708332 zram_generator::config[1466]: No configuration found. Jul 14 22:45:24.789034 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:45:24.804258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 22:45:24.842256 ldconfig[1333]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:45:24.843882 systemd[1]: Reloading finished in 174 ms. Jul 14 22:45:24.854114 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:45:24.854533 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:45:24.867129 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:45:24.874483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:45:24.877384 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:45:24.882501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:45:24.885960 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:45:24.894282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.895135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:45:24.896485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:45:24.900661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:45:24.900820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:45:24.900890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.905534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:45:24.905671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:45:24.906577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 14 22:45:24.906657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:45:24.909652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.909782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:45:24.909853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:45:24.909908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.910257 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 22:45:24.912809 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:45:24.912896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:45:24.918636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.926424 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:45:24.929700 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:45:24.930415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:45:24.942633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:45:24.944312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:45:24.944378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:45:24.944930 systemd[1]: Finished ensure-sysext.service. 
Jul 14 22:45:24.945170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:45:24.945267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:45:24.946961 augenrules[1575]: No rules Jul 14 22:45:24.959187 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:45:24.961852 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:45:24.962153 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:45:24.964430 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:45:24.964559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:45:24.965150 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:45:24.965254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:45:24.970191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:45:24.972734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:45:24.973779 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:45:24.973832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:45:24.974623 systemd-resolved[1541]: Positive Trust Anchors: Jul 14 22:45:24.974635 systemd-resolved[1541]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:45:24.974665 systemd-resolved[1541]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:45:24.977413 systemd-resolved[1541]: Defaulting to hostname 'linux'. Jul 14 22:45:24.981412 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:45:24.981651 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:45:24.981922 systemd[1]: Reached target network.target - Network. Jul 14 22:45:24.982020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:45:24.997082 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:45:25.020523 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:45:25.020784 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:45:25.029223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:45:25.029471 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:45:25.029501 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:45:25.029733 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 14 22:45:25.029912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:45:25.030196 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:45:25.030724 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:45:25.030910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:45:25.031042 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:45:25.031058 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:45:25.031292 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:45:25.031923 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:45:25.033242 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:45:25.034048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:45:25.037015 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:45:25.037156 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:45:25.037259 systemd[1]: Reached target basic.target - Basic System. Jul 14 22:45:25.037457 systemd[1]: System is tainted: cgroupsv1 Jul 14 22:45:25.037480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:45:25.037494 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:45:25.039412 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:45:25.041538 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:45:25.045448 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 14 22:45:25.046509 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:45:25.046610 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:45:25.049400 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:45:25.051037 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:45:25.060483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:45:25.061729 jq[1604]: false Jul 14 22:45:25.068302 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:45:25.073410 extend-filesystems[1606]: Found loop4 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found loop5 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found loop6 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found loop7 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda1 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda2 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda3 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found usr Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda4 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda6 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda7 Jul 14 22:45:25.075437 extend-filesystems[1606]: Found sda9 Jul 14 22:45:25.075437 extend-filesystems[1606]: Checking size of /dev/sda9 Jul 14 22:45:25.075102 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:45:25.075767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:45:25.081673 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 14 22:45:25.087819 dbus-daemon[1603]: [system] SELinux support is enabled Jul 14 22:45:25.088525 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:45:25.091176 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 14 22:45:25.091797 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:45:25.099667 jq[1621]: true Jul 14 22:45:25.107284 extend-filesystems[1606]: Old size kept for /dev/sda9 Jul 14 22:45:25.107284 extend-filesystems[1606]: Found sr0 Jul 14 22:45:25.100592 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:45:25.100733 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:45:25.100890 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:45:25.101010 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:45:25.109752 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:45:25.109899 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:45:25.110217 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:46:52.072600 update_engine[1619]: I20250714 22:46:52.072525 1619 main.cc:92] Flatcar Update Engine starting Jul 14 22:45:25.110351 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:46:52.063030 systemd-timesyncd[1584]: Contacted time server 209.253.210.115:123 (0.flatcar.pool.ntp.org). Jul 14 22:46:52.063058 systemd-timesyncd[1584]: Initial clock synchronization to Mon 2025-07-14 22:46:52.062946 UTC. Jul 14 22:46:52.063089 systemd-resolved[1541]: Clock change detected. Flushing caches. Jul 14 22:46:52.078039 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
Jul 14 22:46:52.090793 jq[1637]: true Jul 14 22:46:52.089617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:46:52.108024 update_engine[1619]: I20250714 22:46:52.090468 1619 update_check_scheduler.cc:74] Next update check in 4m13s Jul 14 22:46:52.108100 tar[1634]: linux-amd64/helm Jul 14 22:46:52.089646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:46:52.089850 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:46:52.089866 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:46:52.102544 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jul 14 22:46:52.103159 (ntainerd)[1647]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:46:52.109385 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:46:52.110872 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:46:52.120115 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:46:52.122176 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Jul 14 22:46:52.137678 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1295) Jul 14 22:46:52.161315 systemd-logind[1613]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:46:52.161329 systemd-logind[1613]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:46:52.162034 systemd-logind[1613]: New seat seat0. Jul 14 22:46:52.167523 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:46:52.178520 unknown[1646]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 14 22:46:52.192996 unknown[1646]: Core dump limit set to -1 Jul 14 22:46:52.224181 bash[1676]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:46:52.222974 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:46:52.224282 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:46:52.224897 kernel: NET: Registered PF_VSOCK protocol family Jul 14 22:46:52.275987 locksmithd[1658]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:46:52.379532 systemd-networkd[1287]: ens192: Gained IPv6LL Jul 14 22:46:52.383138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:46:52.383494 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:46:52.393129 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 14 22:46:52.411019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:46:52.414228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:46:52.505430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:46:52.507336 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jul 14 22:46:52.507481 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 14 22:46:52.508409 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:46:52.552904 containerd[1647]: time="2025-07-14T22:46:52.551450158Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:46:52.591796 containerd[1647]: time="2025-07-14T22:46:52.591767837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.595410 containerd[1647]: time="2025-07-14T22:46:52.595389388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.595914997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.595929540Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596025165Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596035340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596071186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596079335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596213765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596223244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596230607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596235987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596275205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596637 containerd[1647]: time="2025-07-14T22:46:52.596391949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596817 containerd[1647]: time="2025-07-14T22:46:52.596471896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:46:52.596817 containerd[1647]: time="2025-07-14T22:46:52.596482520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:46:52.596817 containerd[1647]: time="2025-07-14T22:46:52.596526972Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:46:52.596817 containerd[1647]: time="2025-07-14T22:46:52.596553617Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:46:52.629777 containerd[1647]: time="2025-07-14T22:46:52.629751449Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:46:52.629902 containerd[1647]: time="2025-07-14T22:46:52.629876539Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:46:52.630010 containerd[1647]: time="2025-07-14T22:46:52.630001855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:46:52.630059 containerd[1647]: time="2025-07-14T22:46:52.630050996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:46:52.630448 containerd[1647]: time="2025-07-14T22:46:52.630089819Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:46:52.630448 containerd[1647]: time="2025-07-14T22:46:52.630201593Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:46:52.630448 containerd[1647]: time="2025-07-14T22:46:52.630408309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 14 22:46:52.630587 containerd[1647]: time="2025-07-14T22:46:52.630577628Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:46:52.630638 containerd[1647]: time="2025-07-14T22:46:52.630630755Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:46:52.630678 containerd[1647]: time="2025-07-14T22:46:52.630670811Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:46:52.630722 containerd[1647]: time="2025-07-14T22:46:52.630713360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.630915 containerd[1647]: time="2025-07-14T22:46:52.630906149Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.630972 containerd[1647]: time="2025-07-14T22:46:52.630948510Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.630972 containerd[1647]: time="2025-07-14T22:46:52.630959510Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.631024 containerd[1647]: time="2025-07-14T22:46:52.631015531Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.632927 containerd[1647]: time="2025-07-14T22:46:52.631095812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.632927 containerd[1647]: time="2025-07-14T22:46:52.631111679Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 14 22:46:52.632927 containerd[1647]: time="2025-07-14T22:46:52.631119116Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:46:52.632927 containerd[1647]: time="2025-07-14T22:46:52.631137476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.632927 containerd[1647]: time="2025-07-14T22:46:52.631145906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.631155269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633036724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633050206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633058251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633066741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633077900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633086464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633120203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633130344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633137274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633144660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633156724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633180700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633193484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633559 containerd[1647]: time="2025-07-14T22:46:52.633200323Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633234784Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633286423Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633296022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633303343Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633308689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633316152Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633335456Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:46:52.633783 containerd[1647]: time="2025-07-14T22:46:52.633347101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 22:46:52.634069 containerd[1647]: time="2025-07-14T22:46:52.633545635Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:46:52.634069 containerd[1647]: time="2025-07-14T22:46:52.633951152Z" level=info msg="Connect containerd service" Jul 14 22:46:52.634069 containerd[1647]: time="2025-07-14T22:46:52.633990476Z" level=info msg="using legacy CRI server" Jul 14 22:46:52.634069 containerd[1647]: time="2025-07-14T22:46:52.633998028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:46:52.635058 containerd[1647]: 
time="2025-07-14T22:46:52.634242115Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:46:52.637159 containerd[1647]: time="2025-07-14T22:46:52.637139062Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:46:52.637951 containerd[1647]: time="2025-07-14T22:46:52.637933455Z" level=info msg="Start subscribing containerd event" Jul 14 22:46:52.639180 containerd[1647]: time="2025-07-14T22:46:52.639165453Z" level=info msg="Start recovering state" Jul 14 22:46:52.639218 containerd[1647]: time="2025-07-14T22:46:52.639210954Z" level=info msg="Start event monitor" Jul 14 22:46:52.639239 containerd[1647]: time="2025-07-14T22:46:52.639219275Z" level=info msg="Start snapshots syncer" Jul 14 22:46:52.639239 containerd[1647]: time="2025-07-14T22:46:52.639225162Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:46:52.639239 containerd[1647]: time="2025-07-14T22:46:52.639229210Z" level=info msg="Start streaming server" Jul 14 22:46:52.639353 containerd[1647]: time="2025-07-14T22:46:52.639341872Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:46:52.639446 containerd[1647]: time="2025-07-14T22:46:52.639436293Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:46:52.639605 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:46:52.640231 containerd[1647]: time="2025-07-14T22:46:52.640221435Z" level=info msg="containerd successfully booted in 0.089888s" Jul 14 22:46:52.651760 sshd_keygen[1636]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:46:52.668105 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
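[editor's note] The `failed to load cni during init` entry above is expected on a fresh node: the CRI plugin looks for a network config under /etc/cni/net.d (per the NetworkPluginConfDir shown in the config dump) and finds none until a CNI plugin or operator installs one. A conflist of the kind it would accept might look like the following sketch; the file name, network name, subnet, and plugin set are illustrative assumptions, not values recovered from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Dropped into /etc/cni/net.d (e.g. as 10-containerd-net.conflist), the "cni network conf syncer" that the log shows starting would pick it up without a containerd restart.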
Jul 14 22:46:52.671532 tar[1634]: linux-amd64/LICENSE
Jul 14 22:46:52.671571 tar[1634]: linux-amd64/README.md
Jul 14 22:46:52.673099 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 14 22:46:52.678156 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 22:46:52.678291 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 14 22:46:52.682026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 14 22:46:52.682534 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 14 22:46:52.694710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 14 22:46:52.700154 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 14 22:46:52.701211 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 14 22:46:52.701594 systemd[1]: Reached target getty.target - Login Prompts.
Jul 14 22:46:54.895898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:46:54.896322 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 14 22:46:54.896535 systemd[1]: Startup finished in 6.588s (kernel) + 5.779s (userspace) = 12.367s.
Jul 14 22:46:54.901662 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:46:54.954559 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 14 22:46:54.954925 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 14 22:46:54.961218 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 14 22:46:54.973217 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 14 22:46:54.974963 systemd-logind[1613]: New session 2 of user core.
Jul 14 22:46:54.977204 systemd-logind[1613]: New session 1 of user core.
Jul 14 22:46:54.985265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 14 22:46:54.999290 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 14 22:46:55.002880 (systemd)[1819]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 22:46:55.061744 systemd[1819]: Queued start job for default target default.target.
Jul 14 22:46:55.061988 systemd[1819]: Created slice app.slice - User Application Slice.
Jul 14 22:46:55.062002 systemd[1819]: Reached target paths.target - Paths.
Jul 14 22:46:55.062011 systemd[1819]: Reached target timers.target - Timers.
Jul 14 22:46:55.071028 systemd[1819]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 14 22:46:55.076005 systemd[1819]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 14 22:46:55.076362 systemd[1819]: Reached target sockets.target - Sockets.
Jul 14 22:46:55.076378 systemd[1819]: Reached target basic.target - Basic System.
Jul 14 22:46:55.076403 systemd[1819]: Reached target default.target - Main User Target.
Jul 14 22:46:55.076420 systemd[1819]: Startup finished in 69ms.
Jul 14 22:46:55.076534 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 14 22:46:55.077331 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 14 22:46:55.077726 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 14 22:46:55.756267 kubelet[1810]: E0714 22:46:55.756196 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:46:55.757400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:46:55.757502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:47:05.807331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 22:47:05.817100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:47:06.159986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:47:06.162377 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:47:06.216046 kubelet[1871]: E0714 22:47:06.215980 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:47:06.218400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:47:06.218514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:47:16.307373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 14 22:47:16.314069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:47:16.657077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
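[editor's note] The repeating `run.go:72] "command failed"` entries are the normal pre-join state of a kubeadm-managed node: the packaged kubelet unit points at /var/lib/kubelet/config.yaml, which `kubeadm init`/`kubeadm join` writes when the node is initialized, and systemd's Restart/RestartSec settings produce the ~10 s retry cadence and the growing restart counter seen here. For reference, a minimal KubeletConfiguration of the kind kubeadm generates could look like this sketch (field values are illustrative assumptions, not recovered from this host):

```yaml
# /var/lib/kubelet/config.yaml -- normally written by `kubeadm init` / `kubeadm join`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

Until such a file exists, the crash loop below is expected and harmless.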
Jul 14 22:47:16.660017 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 14 22:47:16.681940 kubelet[1892]: E0714 22:47:16.681909 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 22:47:16.682920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 22:47:16.683008 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 22:47:22.308356 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 14 22:47:22.313057 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:46130.service - OpenSSH per-connection server daemon (139.178.68.195:46130).
Jul 14 22:47:22.341093 sshd[1900]: Accepted publickey for core from 139.178.68.195 port 46130 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.341722 sshd[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.344156 systemd-logind[1613]: New session 3 of user core.
Jul 14 22:47:22.350053 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 14 22:47:22.402190 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:46140.service - OpenSSH per-connection server daemon (139.178.68.195:46140).
Jul 14 22:47:22.425757 sshd[1905]: Accepted publickey for core from 139.178.68.195 port 46140 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.426590 sshd[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.430948 systemd-logind[1613]: New session 4 of user core.
Jul 14 22:47:22.434727 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 14 22:47:22.488332 sshd[1905]: pam_unix(sshd:session): session closed for user core
Jul 14 22:47:22.494038 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:46148.service - OpenSSH per-connection server daemon (139.178.68.195:46148).
Jul 14 22:47:22.494678 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:46140.service: Deactivated successfully.
Jul 14 22:47:22.495413 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 22:47:22.496427 systemd-logind[1613]: Session 4 logged out. Waiting for processes to exit.
Jul 14 22:47:22.497155 systemd-logind[1613]: Removed session 4.
Jul 14 22:47:22.516510 sshd[1910]: Accepted publickey for core from 139.178.68.195 port 46148 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.517576 sshd[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.520743 systemd-logind[1613]: New session 5 of user core.
Jul 14 22:47:22.531089 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 14 22:47:22.580015 sshd[1910]: pam_unix(sshd:session): session closed for user core
Jul 14 22:47:22.591954 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:46156.service - OpenSSH per-connection server daemon (139.178.68.195:46156).
Jul 14 22:47:22.592335 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:46148.service: Deactivated successfully.
Jul 14 22:47:22.595011 systemd-logind[1613]: Session 5 logged out. Waiting for processes to exit.
Jul 14 22:47:22.595403 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 22:47:22.597000 systemd-logind[1613]: Removed session 5.
Jul 14 22:47:22.615930 sshd[1918]: Accepted publickey for core from 139.178.68.195 port 46156 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.616744 sshd[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.619913 systemd-logind[1613]: New session 6 of user core.
Jul 14 22:47:22.630233 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 14 22:47:22.682014 sshd[1918]: pam_unix(sshd:session): session closed for user core
Jul 14 22:47:22.691278 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:46172.service - OpenSSH per-connection server daemon (139.178.68.195:46172).
Jul 14 22:47:22.692160 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:46156.service: Deactivated successfully.
Jul 14 22:47:22.693093 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 22:47:22.694762 systemd-logind[1613]: Session 6 logged out. Waiting for processes to exit.
Jul 14 22:47:22.695693 systemd-logind[1613]: Removed session 6.
Jul 14 22:47:22.716244 sshd[1926]: Accepted publickey for core from 139.178.68.195 port 46172 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.717087 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.720028 systemd-logind[1613]: New session 7 of user core.
Jul 14 22:47:22.730153 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 14 22:47:22.788488 sudo[1933]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 14 22:47:22.788698 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:47:22.801753 sudo[1933]: pam_unix(sudo:session): session closed for user root
Jul 14 22:47:22.803023 sshd[1926]: pam_unix(sshd:session): session closed for user core
Jul 14 22:47:22.811093 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:46182.service - OpenSSH per-connection server daemon (139.178.68.195:46182).
Jul 14 22:47:22.811409 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:46172.service: Deactivated successfully.
Jul 14 22:47:22.814342 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 22:47:22.815949 systemd-logind[1613]: Session 7 logged out. Waiting for processes to exit.
Jul 14 22:47:22.816768 systemd-logind[1613]: Removed session 7.
Jul 14 22:47:22.838145 sshd[1935]: Accepted publickey for core from 139.178.68.195 port 46182 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.839298 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.843261 systemd-logind[1613]: New session 8 of user core.
Jul 14 22:47:22.849137 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 14 22:47:22.899966 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 14 22:47:22.900179 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:47:22.902914 sudo[1943]: pam_unix(sudo:session): session closed for user root
Jul 14 22:47:22.906488 sudo[1942]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 14 22:47:22.906688 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:47:22.917198 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 14 22:47:22.917977 auditctl[1946]: No rules
Jul 14 22:47:22.918290 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 14 22:47:22.918443 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 14 22:47:22.921112 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:47:22.947115 augenrules[1965]: No rules
Jul 14 22:47:22.947839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:47:22.949834 sudo[1942]: pam_unix(sudo:session): session closed for user root
Jul 14 22:47:22.951702 sshd[1935]: pam_unix(sshd:session): session closed for user core
Jul 14 22:47:22.963176 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:46186.service - OpenSSH per-connection server daemon (139.178.68.195:46186).
Jul 14 22:47:22.963515 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:46182.service: Deactivated successfully.
Jul 14 22:47:22.964411 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 22:47:22.967132 systemd-logind[1613]: Session 8 logged out. Waiting for processes to exit.
Jul 14 22:47:22.968136 systemd-logind[1613]: Removed session 8.
Jul 14 22:47:22.987229 sshd[1971]: Accepted publickey for core from 139.178.68.195 port 46186 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE
Jul 14 22:47:22.988003 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:47:22.990683 systemd-logind[1613]: New session 9 of user core.
Jul 14 22:47:22.998165 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 14 22:47:23.046470 sudo[1978]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 22:47:23.046685 sudo[1978]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 14 22:47:23.317305 (dockerd)[1994]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 14 22:47:23.317328 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 14 22:47:23.584567 dockerd[1994]: time="2025-07-14T22:47:23.584490396Z" level=info msg="Starting up"
Jul 14 22:47:23.645780 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1144087936-merged.mount: Deactivated successfully.
Jul 14 22:47:23.916143 dockerd[1994]: time="2025-07-14T22:47:23.915981324Z" level=info msg="Loading containers: start."
Jul 14 22:47:23.974908 kernel: Initializing XFRM netlink socket
Jul 14 22:47:24.017881 systemd-networkd[1287]: docker0: Link UP
Jul 14 22:47:24.028002 dockerd[1994]: time="2025-07-14T22:47:24.027770510Z" level=info msg="Loading containers: done."
Jul 14 22:47:24.040827 dockerd[1994]: time="2025-07-14T22:47:24.040761014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 22:47:24.041044 dockerd[1994]: time="2025-07-14T22:47:24.040832076Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 14 22:47:24.041044 dockerd[1994]: time="2025-07-14T22:47:24.040913078Z" level=info msg="Daemon has completed initialization"
Jul 14 22:47:24.058576 dockerd[1994]: time="2025-07-14T22:47:24.058543967Z" level=info msg="API listen on /run/docker.sock"
Jul 14 22:47:24.058906 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 14 22:47:24.644250 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2488419520-merged.mount: Deactivated successfully.
Jul 14 22:47:24.744242 containerd[1647]: time="2025-07-14T22:47:24.744212840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 14 22:47:25.318181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753843078.mount: Deactivated successfully.
Jul 14 22:47:26.235292 containerd[1647]: time="2025-07-14T22:47:26.235266187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:26.236053 containerd[1647]: time="2025-07-14T22:47:26.236031792Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 14 22:47:26.236508 containerd[1647]: time="2025-07-14T22:47:26.236493514Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:26.237878 containerd[1647]: time="2025-07-14T22:47:26.237851473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:26.238531 containerd[1647]: time="2025-07-14T22:47:26.238438546Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.494195534s" Jul 14 22:47:26.238531 containerd[1647]: time="2025-07-14T22:47:26.238459416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 14 22:47:26.238884 containerd[1647]: time="2025-07-14T22:47:26.238868275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 22:47:26.807484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 14 22:47:26.817071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:26.891615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:26.895470 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:47:26.963958 kubelet[2201]: E0714 22:47:26.963928 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:47:26.965389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:47:26.965496 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:47:27.969927 containerd[1647]: time="2025-07-14T22:47:27.969455212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:27.975302 containerd[1647]: time="2025-07-14T22:47:27.975264938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 14 22:47:27.988978 containerd[1647]: time="2025-07-14T22:47:27.988939193Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:27.992080 containerd[1647]: time="2025-07-14T22:47:27.992051509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:27.992663 containerd[1647]: 
time="2025-07-14T22:47:27.992557666Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.753667328s" Jul 14 22:47:27.992663 containerd[1647]: time="2025-07-14T22:47:27.992581735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 14 22:47:27.992923 containerd[1647]: time="2025-07-14T22:47:27.992830339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 22:47:29.018556 containerd[1647]: time="2025-07-14T22:47:29.018523730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:29.027623 containerd[1647]: time="2025-07-14T22:47:29.027583972Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 14 22:47:29.038399 containerd[1647]: time="2025-07-14T22:47:29.038355159Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:29.047196 containerd[1647]: time="2025-07-14T22:47:29.047163680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:29.047980 containerd[1647]: time="2025-07-14T22:47:29.047878598Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.055030399s" Jul 14 22:47:29.047980 containerd[1647]: time="2025-07-14T22:47:29.047918660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 14 22:47:29.048318 containerd[1647]: time="2025-07-14T22:47:29.048300376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 22:47:30.134440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248086361.mount: Deactivated successfully. Jul 14 22:47:30.479939 containerd[1647]: time="2025-07-14T22:47:30.479833795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:30.487907 containerd[1647]: time="2025-07-14T22:47:30.487852790Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 14 22:47:30.496368 containerd[1647]: time="2025-07-14T22:47:30.496326789Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:30.512213 containerd[1647]: time="2025-07-14T22:47:30.512170517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:30.512669 containerd[1647]: time="2025-07-14T22:47:30.512581785Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.464215847s" Jul 14 22:47:30.512669 containerd[1647]: time="2025-07-14T22:47:30.512599669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 14 22:47:30.513035 containerd[1647]: time="2025-07-14T22:47:30.513026432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:47:31.512994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043754691.mount: Deactivated successfully. Jul 14 22:47:32.352137 containerd[1647]: time="2025-07-14T22:47:32.352102014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:32.359231 containerd[1647]: time="2025-07-14T22:47:32.359198484Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 22:47:32.371987 containerd[1647]: time="2025-07-14T22:47:32.371952758Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:32.378709 containerd[1647]: time="2025-07-14T22:47:32.378673408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:32.379622 containerd[1647]: time="2025-07-14T22:47:32.379385656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.866290346s" Jul 14 22:47:32.379622 containerd[1647]: time="2025-07-14T22:47:32.379408141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:47:32.379817 containerd[1647]: time="2025-07-14T22:47:32.379802298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:47:33.036242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486100506.mount: Deactivated successfully. Jul 14 22:47:33.058764 containerd[1647]: time="2025-07-14T22:47:33.058730353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:33.061132 containerd[1647]: time="2025-07-14T22:47:33.061062202Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:47:33.063339 containerd[1647]: time="2025-07-14T22:47:33.063310282Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:33.067829 containerd[1647]: time="2025-07-14T22:47:33.067800953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:33.068410 containerd[1647]: time="2025-07-14T22:47:33.068112557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 688.251423ms" Jul 14 
22:47:33.068410 containerd[1647]: time="2025-07-14T22:47:33.068129318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:47:33.068410 containerd[1647]: time="2025-07-14T22:47:33.068393304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:47:34.155646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237056182.mount: Deactivated successfully. Jul 14 22:47:36.367765 containerd[1647]: time="2025-07-14T22:47:36.367087505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:36.371041 containerd[1647]: time="2025-07-14T22:47:36.371012874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 14 22:47:36.373372 containerd[1647]: time="2025-07-14T22:47:36.373347918Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:36.375183 containerd[1647]: time="2025-07-14T22:47:36.375152590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:36.376053 containerd[1647]: time="2025-07-14T22:47:36.375939468Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.307529813s" Jul 14 22:47:36.376053 containerd[1647]: time="2025-07-14T22:47:36.375964491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:47:37.057390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:47:37.066058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:37.543412 update_engine[1619]: I20250714 22:47:37.542943 1619 update_attempter.cc:509] Updating boot flags... Jul 14 22:47:37.770958 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2357) Jul 14 22:47:37.807259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:37.810459 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:47:37.993533 kubelet[2371]: E0714 22:47:37.993496 2371 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:47:37.994690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:47:37.994879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:47:43.563897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:43.571019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:43.590767 systemd[1]: Reloading requested from client PID 2401 ('systemctl') (unit session-9.scope)... Jul 14 22:47:43.590828 systemd[1]: Reloading... Jul 14 22:47:43.638934 zram_generator::config[2438]: No configuration found. 
Jul 14 22:47:43.705741 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:47:43.720770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:47:43.768256 systemd[1]: Reloading finished in 177 ms. Jul 14 22:47:43.798596 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:47:43.798642 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 22:47:43.798809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:43.803120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:44.118136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:44.120827 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:47:44.144197 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:47:44.144197 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:47:44.144197 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:47:44.144197 kubelet[2516]: I0714 22:47:44.143600 2516 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:47:44.380642 kubelet[2516]: I0714 22:47:44.380569 2516 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:47:44.380642 kubelet[2516]: I0714 22:47:44.380599 2516 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:47:44.380881 kubelet[2516]: I0714 22:47:44.380826 2516 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:47:44.683957 kubelet[2516]: I0714 22:47:44.683882 2516 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:47:44.689297 kubelet[2516]: E0714 22:47:44.689259 2516 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:44.724804 kubelet[2516]: E0714 22:47:44.724723 2516 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:47:44.724804 kubelet[2516]: I0714 22:47:44.724744 2516 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:47:44.732431 kubelet[2516]: I0714 22:47:44.732384 2516 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:47:44.738234 kubelet[2516]: I0714 22:47:44.738200 2516 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:47:44.738400 kubelet[2516]: I0714 22:47:44.738357 2516 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:47:44.738539 kubelet[2516]: I0714 22:47:44.738397 2516 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 14 22:47:44.738640 kubelet[2516]: I0714 22:47:44.738545 2516 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:47:44.738640 kubelet[2516]: I0714 22:47:44.738554 2516 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:47:44.738640 kubelet[2516]: I0714 22:47:44.738635 2516 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:47:44.741660 kubelet[2516]: I0714 22:47:44.741629 2516 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:47:44.741660 kubelet[2516]: I0714 22:47:44.741655 2516 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:47:44.741742 kubelet[2516]: I0714 22:47:44.741682 2516 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:47:44.741742 kubelet[2516]: I0714 22:47:44.741697 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:47:44.745724 kubelet[2516]: W0714 22:47:44.745331 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:44.745724 kubelet[2516]: E0714 22:47:44.745373 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:44.745724 kubelet[2516]: W0714 22:47:44.745652 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 
22:47:44.745724 kubelet[2516]: E0714 22:47:44.745685 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:44.746236 kubelet[2516]: I0714 22:47:44.746136 2516 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:47:44.750717 kubelet[2516]: I0714 22:47:44.750559 2516 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:47:44.751367 kubelet[2516]: W0714 22:47:44.751294 2516 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:47:44.752264 kubelet[2516]: I0714 22:47:44.752252 2516 server.go:1274] "Started kubelet" Jul 14 22:47:44.754960 kubelet[2516]: I0714 22:47:44.754472 2516 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:47:44.756864 kubelet[2516]: I0714 22:47:44.756501 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:47:44.756864 kubelet[2516]: I0714 22:47:44.756722 2516 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:47:44.757755 kubelet[2516]: I0714 22:47:44.757363 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:47:44.759053 kubelet[2516]: I0714 22:47:44.759035 2516 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:47:44.760077 kubelet[2516]: I0714 22:47:44.760010 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 
22:47:44.762862 kubelet[2516]: I0714 22:47:44.762849 2516 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:47:44.763732 kubelet[2516]: E0714 22:47:44.763488 2516 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:47:44.766244 kubelet[2516]: E0714 22:47:44.763057 2516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523fb80e9ccf85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:47:44.752234373 +0000 UTC m=+0.629412699,LastTimestamp:2025-07-14 22:47:44.752234373 +0000 UTC m=+0.629412699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:47:44.766899 kubelet[2516]: E0714 22:47:44.766471 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Jul 14 22:47:44.768112 kubelet[2516]: I0714 22:47:44.767754 2516 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:47:44.768736 kubelet[2516]: I0714 22:47:44.768158 2516 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:47:44.768736 kubelet[2516]: I0714 22:47:44.768258 2516 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:47:44.768736 kubelet[2516]: I0714 22:47:44.768315 2516 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:47:44.768736 kubelet[2516]: W0714 22:47:44.768698 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:44.768736 kubelet[2516]: E0714 22:47:44.768722 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:44.771218 kubelet[2516]: I0714 22:47:44.771204 2516 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:47:44.775399 kubelet[2516]: I0714 22:47:44.775371 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:47:44.776106 kubelet[2516]: I0714 22:47:44.776096 2516 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:47:44.776153 kubelet[2516]: I0714 22:47:44.776149 2516 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:47:44.776191 kubelet[2516]: I0714 22:47:44.776187 2516 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:47:44.776258 kubelet[2516]: E0714 22:47:44.776248 2516 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:47:44.811325 kubelet[2516]: W0714 22:47:44.805773 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:44.811325 kubelet[2516]: E0714 22:47:44.810918 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:44.811690 kubelet[2516]: I0714 22:47:44.811677 2516 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:47:44.811690 kubelet[2516]: I0714 22:47:44.811689 2516 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:47:44.811735 kubelet[2516]: I0714 22:47:44.811701 2516 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:47:44.812842 kubelet[2516]: I0714 22:47:44.812828 2516 policy_none.go:49] "None policy: Start" Jul 14 22:47:44.813177 kubelet[2516]: I0714 22:47:44.813170 2516 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:47:44.813379 kubelet[2516]: I0714 22:47:44.813239 2516 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:47:44.816794 kubelet[2516]: I0714 22:47:44.816774 2516 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:47:44.816975 kubelet[2516]: I0714 22:47:44.816896 2516 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:47:44.816975 kubelet[2516]: I0714 22:47:44.816905 2516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:47:44.818267 kubelet[2516]: I0714 22:47:44.818252 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:47:44.821524 kubelet[2516]: E0714 22:47:44.821502 2516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:47:44.922833 kubelet[2516]: I0714 22:47:44.922813 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:44.923031 kubelet[2516]: E0714 22:47:44.923013 2516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 14 22:47:44.967866 kubelet[2516]: E0714 22:47:44.967735 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Jul 14 22:47:45.070980 kubelet[2516]: I0714 22:47:45.070831 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:45.070980 kubelet[2516]: I0714 22:47:45.070866 2516 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:45.070980 kubelet[2516]: I0714 22:47:45.070883 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:45.070980 kubelet[2516]: I0714 22:47:45.070934 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:47:45.070980 kubelet[2516]: I0714 22:47:45.070948 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:45.071183 kubelet[2516]: I0714 22:47:45.070964 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:45.071183 kubelet[2516]: I0714 22:47:45.070979 2516 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:45.071183 kubelet[2516]: I0714 22:47:45.070994 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:45.071183 kubelet[2516]: I0714 22:47:45.071005 2516 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:45.125492 kubelet[2516]: I0714 22:47:45.125429 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:45.125684 kubelet[2516]: E0714 22:47:45.125660 2516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 14 22:47:45.190741 containerd[1647]: time="2025-07-14T22:47:45.190531684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 22:47:45.190741 containerd[1647]: time="2025-07-14T22:47:45.190570196Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 22:47:45.190741 containerd[1647]: time="2025-07-14T22:47:45.190681050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae75233e4ec4a26ca4f121d42c4b20ba,Namespace:kube-system,Attempt:0,}" Jul 14 22:47:45.368140 kubelet[2516]: E0714 22:47:45.368105 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Jul 14 22:47:45.527456 kubelet[2516]: I0714 22:47:45.527423 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:45.527652 kubelet[2516]: E0714 22:47:45.527637 2516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 14 22:47:45.657391 kubelet[2516]: W0714 22:47:45.657288 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:45.657391 kubelet[2516]: E0714 22:47:45.657336 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:45.661723 kubelet[2516]: W0714 22:47:45.661685 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:45.661723 kubelet[2516]: E0714 22:47:45.661709 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:45.762198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057371338.mount: Deactivated successfully. Jul 14 22:47:45.764866 containerd[1647]: time="2025-07-14T22:47:45.764792792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:47:45.765425 containerd[1647]: time="2025-07-14T22:47:45.765385621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:47:45.765841 containerd[1647]: time="2025-07-14T22:47:45.765812338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:47:45.766401 containerd[1647]: time="2025-07-14T22:47:45.766286527Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:47:45.766401 containerd[1647]: time="2025-07-14T22:47:45.766292892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:47:45.766921 containerd[1647]: time="2025-07-14T22:47:45.766901466Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:47:45.768326 containerd[1647]: time="2025-07-14T22:47:45.768255460Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:47:45.769273 containerd[1647]: time="2025-07-14T22:47:45.769134541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 578.412293ms" Jul 14 22:47:45.769968 containerd[1647]: time="2025-07-14T22:47:45.769920441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.326739ms" Jul 14 22:47:45.770786 containerd[1647]: time="2025-07-14T22:47:45.770250974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:47:45.772229 containerd[1647]: time="2025-07-14T22:47:45.772192317Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.577919ms" Jul 14 22:47:45.898299 containerd[1647]: time="2025-07-14T22:47:45.898047254Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:47:45.898299 containerd[1647]: time="2025-07-14T22:47:45.898075853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:47:45.898299 containerd[1647]: time="2025-07-14T22:47:45.898082655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.898299 containerd[1647]: time="2025-07-14T22:47:45.898140630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.911718 containerd[1647]: time="2025-07-14T22:47:45.902611385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:47:45.911718 containerd[1647]: time="2025-07-14T22:47:45.902636345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:47:45.911718 containerd[1647]: time="2025-07-14T22:47:45.902646235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.911718 containerd[1647]: time="2025-07-14T22:47:45.902686764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.911835 containerd[1647]: time="2025-07-14T22:47:45.901314257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:47:45.911835 containerd[1647]: time="2025-07-14T22:47:45.901340891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:47:45.911835 containerd[1647]: time="2025-07-14T22:47:45.901351192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.911835 containerd[1647]: time="2025-07-14T22:47:45.901399968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:45.967477 containerd[1647]: time="2025-07-14T22:47:45.967455937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf9e4e8e728292925d89802df21d04656b00736fb69db7c067f4ecd525426587\"" Jul 14 22:47:45.971594 containerd[1647]: time="2025-07-14T22:47:45.971567317Z" level=info msg="CreateContainer within sandbox \"bf9e4e8e728292925d89802df21d04656b00736fb69db7c067f4ecd525426587\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:47:45.976234 containerd[1647]: time="2025-07-14T22:47:45.976214653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae75233e4ec4a26ca4f121d42c4b20ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"77845169d6107d6f5ec0190f12a5951771a45e5243244be1263e44c51440ec21\"" Jul 14 22:47:45.978047 containerd[1647]: time="2025-07-14T22:47:45.977710920Z" level=info msg="CreateContainer within sandbox \"77845169d6107d6f5ec0190f12a5951771a45e5243244be1263e44c51440ec21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:47:45.979132 containerd[1647]: time="2025-07-14T22:47:45.979115544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cd6a47525f9389b4fa4eed73f4279233c7e59d2e2fd8b13935c61def59725e6\"" Jul 14 
22:47:45.980163 containerd[1647]: time="2025-07-14T22:47:45.980117091Z" level=info msg="CreateContainer within sandbox \"9cd6a47525f9389b4fa4eed73f4279233c7e59d2e2fd8b13935c61def59725e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:47:45.999306 containerd[1647]: time="2025-07-14T22:47:45.999281475Z" level=info msg="CreateContainer within sandbox \"77845169d6107d6f5ec0190f12a5951771a45e5243244be1263e44c51440ec21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b2143a944fce8c9767975507a9f66c4daae7e88261161a7ef662a9a281642c60\"" Jul 14 22:47:45.999755 containerd[1647]: time="2025-07-14T22:47:45.999712339Z" level=info msg="StartContainer for \"b2143a944fce8c9767975507a9f66c4daae7e88261161a7ef662a9a281642c60\"" Jul 14 22:47:46.002235 containerd[1647]: time="2025-07-14T22:47:46.002001227Z" level=info msg="CreateContainer within sandbox \"bf9e4e8e728292925d89802df21d04656b00736fb69db7c067f4ecd525426587\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f20d75de7190a66e4ab9385d9889a45f1f508e63aa96c4504cf58a3bc7e45173\"" Jul 14 22:47:46.002235 containerd[1647]: time="2025-07-14T22:47:46.002146957Z" level=info msg="CreateContainer within sandbox \"9cd6a47525f9389b4fa4eed73f4279233c7e59d2e2fd8b13935c61def59725e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bb4d1f75608a8d7a4931ee5dc67201d149b03708dfe90886216cf5342d10bb5\"" Jul 14 22:47:46.002386 containerd[1647]: time="2025-07-14T22:47:46.002372223Z" level=info msg="StartContainer for \"9bb4d1f75608a8d7a4931ee5dc67201d149b03708dfe90886216cf5342d10bb5\"" Jul 14 22:47:46.002596 containerd[1647]: time="2025-07-14T22:47:46.002582473Z" level=info msg="StartContainer for \"f20d75de7190a66e4ab9385d9889a45f1f508e63aa96c4504cf58a3bc7e45173\"" Jul 14 22:47:46.076504 containerd[1647]: time="2025-07-14T22:47:46.076436442Z" level=info msg="StartContainer for 
\"b2143a944fce8c9767975507a9f66c4daae7e88261161a7ef662a9a281642c60\" returns successfully" Jul 14 22:47:46.077066 containerd[1647]: time="2025-07-14T22:47:46.076479447Z" level=info msg="StartContainer for \"f20d75de7190a66e4ab9385d9889a45f1f508e63aa96c4504cf58a3bc7e45173\" returns successfully" Jul 14 22:47:46.077376 containerd[1647]: time="2025-07-14T22:47:46.076481299Z" level=info msg="StartContainer for \"9bb4d1f75608a8d7a4931ee5dc67201d149b03708dfe90886216cf5342d10bb5\" returns successfully" Jul 14 22:47:46.168920 kubelet[2516]: E0714 22:47:46.168403 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="1.6s" Jul 14 22:47:46.180437 kubelet[2516]: W0714 22:47:46.180365 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:46.180437 kubelet[2516]: E0714 22:47:46.180423 2516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:46.312363 kubelet[2516]: W0714 22:47:46.312291 2516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jul 14 22:47:46.312363 kubelet[2516]: E0714 22:47:46.312338 2516 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:46.328666 kubelet[2516]: I0714 22:47:46.328647 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:46.329007 kubelet[2516]: E0714 22:47:46.328819 2516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jul 14 22:47:46.732462 kubelet[2516]: E0714 22:47:46.732432 2516 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.103:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:47:47.929745 kubelet[2516]: I0714 22:47:47.929682 2516 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:48.705139 kubelet[2516]: E0714 22:47:48.705103 2516 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 22:47:48.748362 kubelet[2516]: I0714 22:47:48.748328 2516 apiserver.go:52] "Watching apiserver" Jul 14 22:47:48.771987 kubelet[2516]: I0714 22:47:48.771901 2516 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:47:48.771987 kubelet[2516]: I0714 22:47:48.771955 2516 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:47:48.816379 kubelet[2516]: E0714 22:47:48.816322 2516 event.go:359] "Server rejected event 
(will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18523fb80e9ccf85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:47:44.752234373 +0000 UTC m=+0.629412699,LastTimestamp:2025-07-14 22:47:44.752234373 +0000 UTC m=+0.629412699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:47:50.844366 systemd[1]: Reloading requested from client PID 2789 ('systemctl') (unit session-9.scope)... Jul 14 22:47:50.844376 systemd[1]: Reloading... Jul 14 22:47:50.903922 zram_generator::config[2837]: No configuration found. Jul 14 22:47:50.969464 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 14 22:47:50.985974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:47:51.034785 systemd[1]: Reloading finished in 190 ms. Jul 14 22:47:51.058545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:51.073622 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:47:51.073847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:47:51.079177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:47:51.581493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:47:51.592200 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:47:51.636917 kubelet[2904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:47:51.636917 kubelet[2904]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:47:51.636917 kubelet[2904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:47:51.636917 kubelet[2904]: I0714 22:47:51.636714 2904 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:47:51.644133 kubelet[2904]: I0714 22:47:51.644111 2904 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:47:51.644133 kubelet[2904]: I0714 22:47:51.644129 2904 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:47:51.644286 kubelet[2904]: I0714 22:47:51.644275 2904 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:47:51.645102 kubelet[2904]: I0714 22:47:51.645091 2904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 14 22:47:51.656700 kubelet[2904]: I0714 22:47:51.656675 2904 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:47:51.658814 kubelet[2904]: E0714 22:47:51.658795 2904 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:47:51.658814 kubelet[2904]: I0714 22:47:51.658813 2904 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:47:51.661947 kubelet[2904]: I0714 22:47:51.660615 2904 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:47:51.661947 kubelet[2904]: I0714 22:47:51.660832 2904 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:47:51.661947 kubelet[2904]: I0714 22:47:51.660903 2904 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:47:51.661947 kubelet[2904]: I0714 22:47:51.660926 2904 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661073 2904 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661080 2904 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661100 2904 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661149 2904 kubelet.go:408] "Attempting 
to sync node with API server" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661156 2904 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661173 2904 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:47:51.662085 kubelet[2904]: I0714 22:47:51.661182 2904 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:47:51.662566 kubelet[2904]: I0714 22:47:51.662554 2904 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:47:51.662847 kubelet[2904]: I0714 22:47:51.662836 2904 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:47:51.663089 kubelet[2904]: I0714 22:47:51.663079 2904 server.go:1274] "Started kubelet" Jul 14 22:47:51.668628 kubelet[2904]: I0714 22:47:51.668155 2904 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:47:51.672175 kubelet[2904]: I0714 22:47:51.672158 2904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:47:51.672328 kubelet[2904]: I0714 22:47:51.672321 2904 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:47:51.673147 kubelet[2904]: I0714 22:47:51.673136 2904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:47:51.673382 kubelet[2904]: I0714 22:47:51.673375 2904 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:47:51.679953 kubelet[2904]: E0714 22:47:51.679940 2904 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:47:51.680107 kubelet[2904]: I0714 22:47:51.680098 2904 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:47:51.684486 kubelet[2904]: I0714 22:47:51.683958 2904 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:47:51.684661 kubelet[2904]: I0714 22:47:51.684653 2904 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:47:51.684777 kubelet[2904]: I0714 22:47:51.684770 2904 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:47:51.686749 kubelet[2904]: I0714 22:47:51.686731 2904 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:47:51.686819 kubelet[2904]: I0714 22:47:51.686807 2904 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:47:51.688138 kubelet[2904]: I0714 22:47:51.688125 2904 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:47:51.688881 kubelet[2904]: I0714 22:47:51.688867 2904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:47:51.689550 kubelet[2904]: I0714 22:47:51.689533 2904 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:47:51.689630 kubelet[2904]: I0714 22:47:51.689619 2904 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:47:51.689691 kubelet[2904]: I0714 22:47:51.689685 2904 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:47:51.689766 kubelet[2904]: E0714 22:47:51.689757 2904 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:47:51.731707 kubelet[2904]: I0714 22:47:51.731694 2904 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:47:51.731810 kubelet[2904]: I0714 22:47:51.731804 2904 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:47:51.731895 kubelet[2904]: I0714 22:47:51.731880 2904 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:47:51.732026 kubelet[2904]: I0714 22:47:51.732018 2904 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:47:51.732069 kubelet[2904]: I0714 22:47:51.732056 2904 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:47:51.732097 kubelet[2904]: I0714 22:47:51.732093 2904 policy_none.go:49] "None policy: Start" Jul 14 22:47:51.732428 kubelet[2904]: I0714 22:47:51.732420 2904 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:47:51.732508 kubelet[2904]: I0714 22:47:51.732503 2904 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:47:51.732627 kubelet[2904]: I0714 22:47:51.732620 2904 state_mem.go:75] "Updated machine memory state" Jul 14 22:47:51.733417 kubelet[2904]: I0714 22:47:51.733408 2904 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:47:51.733995 kubelet[2904]: I0714 22:47:51.733988 2904 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:47:51.734046 kubelet[2904]: I0714 22:47:51.734032 2904 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:47:51.734250 kubelet[2904]: I0714 22:47:51.734243 2904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:47:51.804742 kubelet[2904]: E0714 22:47:51.804704 2904 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:51.838276 kubelet[2904]: I0714 22:47:51.838122 2904 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:47:51.846380 kubelet[2904]: I0714 22:47:51.846258 2904 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:47:51.846380 kubelet[2904]: I0714 22:47:51.846324 2904 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:47:51.885325 kubelet[2904]: I0714 22:47:51.885281 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:51.885325 kubelet[2904]: I0714 22:47:51.885302 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:51.885467 kubelet[2904]: I0714 22:47:51.885317 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:51.885467 kubelet[2904]: I0714 22:47:51.885372 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:47:51.885467 kubelet[2904]: I0714 22:47:51.885383 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:51.885467 kubelet[2904]: I0714 22:47:51.885391 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:51.885467 kubelet[2904]: I0714 22:47:51.885400 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae75233e4ec4a26ca4f121d42c4b20ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae75233e4ec4a26ca4f121d42c4b20ba\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:51.885601 kubelet[2904]: I0714 22:47:51.885409 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:51.885601 kubelet[2904]: I0714 22:47:51.885447 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:47:52.662362 kubelet[2904]: I0714 22:47:52.662331 2904 apiserver.go:52] "Watching apiserver" Jul 14 22:47:52.685423 kubelet[2904]: I0714 22:47:52.685409 2904 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:47:52.720524 kubelet[2904]: E0714 22:47:52.720487 2904 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:47:52.758944 kubelet[2904]: I0714 22:47:52.758786 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.758764346 podStartE2EDuration="1.758764346s" podCreationTimestamp="2025-07-14 22:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:47:52.751121389 +0000 UTC m=+1.142413684" watchObservedRunningTime="2025-07-14 22:47:52.758764346 +0000 UTC m=+1.150056641" Jul 14 22:47:52.763623 kubelet[2904]: I0714 22:47:52.763325 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.76331437 podStartE2EDuration="2.76331437s" podCreationTimestamp="2025-07-14 22:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:47:52.762630679 +0000 UTC 
m=+1.153922970" watchObservedRunningTime="2025-07-14 22:47:52.76331437 +0000 UTC m=+1.154606656" Jul 14 22:47:52.763623 kubelet[2904]: I0714 22:47:52.763381 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7633772890000001 podStartE2EDuration="1.763377289s" podCreationTimestamp="2025-07-14 22:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:47:52.75886688 +0000 UTC m=+1.150159170" watchObservedRunningTime="2025-07-14 22:47:52.763377289 +0000 UTC m=+1.154669583" Jul 14 22:47:55.111987 kubelet[2904]: I0714 22:47:55.111963 2904 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:47:55.112267 kubelet[2904]: I0714 22:47:55.112249 2904 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:47:55.112292 containerd[1647]: time="2025-07-14T22:47:55.112156914Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 14 22:47:56.111074 kubelet[2904]: I0714 22:47:56.110912 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ft9d\" (UniqueName: \"kubernetes.io/projected/d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d-kube-api-access-7ft9d\") pod \"kube-proxy-wl9tt\" (UID: \"d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d\") " pod="kube-system/kube-proxy-wl9tt" Jul 14 22:47:56.111074 kubelet[2904]: I0714 22:47:56.110946 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3301bf5-4a80-47bd-ad4f-eddc8b43e48c-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-qqmqh\" (UID: \"d3301bf5-4a80-47bd-ad4f-eddc8b43e48c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qqmqh" Jul 14 22:47:56.111074 kubelet[2904]: I0714 22:47:56.110961 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d-kube-proxy\") pod \"kube-proxy-wl9tt\" (UID: \"d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d\") " pod="kube-system/kube-proxy-wl9tt" Jul 14 22:47:56.111074 kubelet[2904]: I0714 22:47:56.110977 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d-lib-modules\") pod \"kube-proxy-wl9tt\" (UID: \"d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d\") " pod="kube-system/kube-proxy-wl9tt" Jul 14 22:47:56.111074 kubelet[2904]: I0714 22:47:56.110997 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d-xtables-lock\") pod \"kube-proxy-wl9tt\" (UID: \"d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d\") " pod="kube-system/kube-proxy-wl9tt" Jul 14 22:47:56.111285 kubelet[2904]: I0714 
22:47:56.111013 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4m9\" (UniqueName: \"kubernetes.io/projected/d3301bf5-4a80-47bd-ad4f-eddc8b43e48c-kube-api-access-7p4m9\") pod \"tigera-operator-5bf8dfcb4-qqmqh\" (UID: \"d3301bf5-4a80-47bd-ad4f-eddc8b43e48c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qqmqh" Jul 14 22:47:56.367978 containerd[1647]: time="2025-07-14T22:47:56.367903687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wl9tt,Uid:d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d,Namespace:kube-system,Attempt:0,}" Jul 14 22:47:56.394590 containerd[1647]: time="2025-07-14T22:47:56.394543234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qqmqh,Uid:d3301bf5-4a80-47bd-ad4f-eddc8b43e48c,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442596705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442633489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442654650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442732084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442708441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442741332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.442749709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:56.444367 containerd[1647]: time="2025-07-14T22:47:56.443120188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:47:56.484137 containerd[1647]: time="2025-07-14T22:47:56.484047948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wl9tt,Uid:d7f03a40-efa1-4dfc-8f36-aed7c27e2b9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b2fdb6c1d6ddb6140bf996808d9dd10b9e675e769d78bc02f064f7aeacbdf0b\"" Jul 14 22:47:56.488679 containerd[1647]: time="2025-07-14T22:47:56.488613394Z" level=info msg="CreateContainer within sandbox \"0b2fdb6c1d6ddb6140bf996808d9dd10b9e675e769d78bc02f064f7aeacbdf0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:47:56.498284 containerd[1647]: time="2025-07-14T22:47:56.497733100Z" level=info msg="CreateContainer within sandbox \"0b2fdb6c1d6ddb6140bf996808d9dd10b9e675e769d78bc02f064f7aeacbdf0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"133ea5aa2a6a15c1a826722135bb4f5fe482719d6e4ae7770c4bcd9858a52f1e\"" Jul 14 22:47:56.498714 containerd[1647]: time="2025-07-14T22:47:56.498422984Z" level=info msg="StartContainer for \"133ea5aa2a6a15c1a826722135bb4f5fe482719d6e4ae7770c4bcd9858a52f1e\"" Jul 14 22:47:56.501272 containerd[1647]: time="2025-07-14T22:47:56.501220127Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qqmqh,Uid:d3301bf5-4a80-47bd-ad4f-eddc8b43e48c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"98414cbb405855aa57cdc1c2be3fc529357c6cafad6be5468855e5793b16b17e\"" Jul 14 22:47:56.503558 containerd[1647]: time="2025-07-14T22:47:56.503538688Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:47:56.535280 containerd[1647]: time="2025-07-14T22:47:56.535254036Z" level=info msg="StartContainer for \"133ea5aa2a6a15c1a826722135bb4f5fe482719d6e4ae7770c4bcd9858a52f1e\" returns successfully" Jul 14 22:47:56.759730 kubelet[2904]: I0714 22:47:56.759120 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wl9tt" podStartSLOduration=0.759106672 podStartE2EDuration="759.106672ms" podCreationTimestamp="2025-07-14 22:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:47:56.73590269 +0000 UTC m=+5.127194976" watchObservedRunningTime="2025-07-14 22:47:56.759106672 +0000 UTC m=+5.150398958" Jul 14 22:47:57.732660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79850911.mount: Deactivated successfully. 
Jul 14 22:47:58.330422 containerd[1647]: time="2025-07-14T22:47:58.330392792Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:58.331245 containerd[1647]: time="2025-07-14T22:47:58.331220366Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 14 22:47:58.331479 containerd[1647]: time="2025-07-14T22:47:58.331462546Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:58.336514 containerd[1647]: time="2025-07-14T22:47:58.336494257Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:47:58.337490 containerd[1647]: time="2025-07-14T22:47:58.337469881Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.833909771s" Jul 14 22:47:58.337517 containerd[1647]: time="2025-07-14T22:47:58.337491392Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 22:47:58.338807 containerd[1647]: time="2025-07-14T22:47:58.338737265Z" level=info msg="CreateContainer within sandbox \"98414cbb405855aa57cdc1c2be3fc529357c6cafad6be5468855e5793b16b17e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 22:47:58.364899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668889711.mount: Deactivated successfully. 
Jul 14 22:47:58.374994 containerd[1647]: time="2025-07-14T22:47:58.374966750Z" level=info msg="CreateContainer within sandbox \"98414cbb405855aa57cdc1c2be3fc529357c6cafad6be5468855e5793b16b17e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79c338d07e60f38b36081c836f25b32605fda503b8cb356bcbc4de4088cabe9a\"" Jul 14 22:47:58.375384 containerd[1647]: time="2025-07-14T22:47:58.375366947Z" level=info msg="StartContainer for \"79c338d07e60f38b36081c836f25b32605fda503b8cb356bcbc4de4088cabe9a\"" Jul 14 22:47:58.392694 systemd[1]: run-containerd-runc-k8s.io-79c338d07e60f38b36081c836f25b32605fda503b8cb356bcbc4de4088cabe9a-runc.3JBvp6.mount: Deactivated successfully. Jul 14 22:47:58.412139 containerd[1647]: time="2025-07-14T22:47:58.412113159Z" level=info msg="StartContainer for \"79c338d07e60f38b36081c836f25b32605fda503b8cb356bcbc4de4088cabe9a\" returns successfully" Jul 14 22:47:58.736298 kubelet[2904]: I0714 22:47:58.736169 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-qqmqh" podStartSLOduration=0.901115478 podStartE2EDuration="2.7361561s" podCreationTimestamp="2025-07-14 22:47:56 +0000 UTC" firstStartedPulling="2025-07-14 22:47:56.502939104 +0000 UTC m=+4.894231390" lastFinishedPulling="2025-07-14 22:47:58.337979724 +0000 UTC m=+6.729272012" observedRunningTime="2025-07-14 22:47:58.735648683 +0000 UTC m=+7.126940987" watchObservedRunningTime="2025-07-14 22:47:58.7361561 +0000 UTC m=+7.127448402" Jul 14 22:48:04.698704 sudo[1978]: pam_unix(sudo:session): session closed for user root Jul 14 22:48:04.702251 sshd[1971]: pam_unix(sshd:session): session closed for user core Jul 14 22:48:04.706669 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:46186.service: Deactivated successfully. Jul 14 22:48:04.722099 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:48:04.722150 systemd-logind[1613]: Session 9 logged out. Waiting for processes to exit. 
Jul 14 22:48:04.727361 systemd-logind[1613]: Removed session 9. Jul 14 22:48:08.447032 kubelet[2904]: I0714 22:48:08.446673 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd9c4404-bb25-48ec-be05-1c2aea52076b-tigera-ca-bundle\") pod \"calico-typha-5f5796f546-5n4cb\" (UID: \"bd9c4404-bb25-48ec-be05-1c2aea52076b\") " pod="calico-system/calico-typha-5f5796f546-5n4cb" Jul 14 22:48:08.447032 kubelet[2904]: I0714 22:48:08.446741 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd9c4404-bb25-48ec-be05-1c2aea52076b-typha-certs\") pod \"calico-typha-5f5796f546-5n4cb\" (UID: \"bd9c4404-bb25-48ec-be05-1c2aea52076b\") " pod="calico-system/calico-typha-5f5796f546-5n4cb" Jul 14 22:48:08.447032 kubelet[2904]: I0714 22:48:08.446761 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8nk\" (UniqueName: \"kubernetes.io/projected/bd9c4404-bb25-48ec-be05-1c2aea52076b-kube-api-access-mn8nk\") pod \"calico-typha-5f5796f546-5n4cb\" (UID: \"bd9c4404-bb25-48ec-be05-1c2aea52076b\") " pod="calico-system/calico-typha-5f5796f546-5n4cb" Jul 14 22:48:08.648916 kubelet[2904]: I0714 22:48:08.647810 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-lib-modules\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.648916 kubelet[2904]: I0714 22:48:08.647834 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-policysync\") pod \"calico-node-jk66q\" (UID: 
\"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.648916 kubelet[2904]: I0714 22:48:08.647844 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpj2\" (UniqueName: \"kubernetes.io/projected/4fe498bf-795a-4d60-9b97-fda4601215e9-kube-api-access-6bpj2\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.648916 kubelet[2904]: I0714 22:48:08.647857 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-cni-log-dir\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.648916 kubelet[2904]: I0714 22:48:08.647865 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-xtables-lock\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649113 kubelet[2904]: I0714 22:48:08.647875 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-cni-bin-dir\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649113 kubelet[2904]: I0714 22:48:08.647883 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-var-run-calico\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " 
pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649113 kubelet[2904]: I0714 22:48:08.647912 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fe498bf-795a-4d60-9b97-fda4601215e9-tigera-ca-bundle\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649113 kubelet[2904]: I0714 22:48:08.647921 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-flexvol-driver-host\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649113 kubelet[2904]: I0714 22:48:08.647931 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-cni-net-dir\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649217 kubelet[2904]: I0714 22:48:08.647940 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4fe498bf-795a-4d60-9b97-fda4601215e9-node-certs\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.649217 kubelet[2904]: I0714 22:48:08.647950 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4fe498bf-795a-4d60-9b97-fda4601215e9-var-lib-calico\") pod \"calico-node-jk66q\" (UID: \"4fe498bf-795a-4d60-9b97-fda4601215e9\") " pod="calico-system/calico-node-jk66q" Jul 14 22:48:08.703606 
containerd[1647]: time="2025-07-14T22:48:08.703417241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f5796f546-5n4cb,Uid:bd9c4404-bb25-48ec-be05-1c2aea52076b,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:08.764712 kubelet[2904]: E0714 22:48:08.764643 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.764712 kubelet[2904]: W0714 22:48:08.764665 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.764712 kubelet[2904]: E0714 22:48:08.764682 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.777373 containerd[1647]: time="2025-07-14T22:48:08.777005512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:08.778200 containerd[1647]: time="2025-07-14T22:48:08.777438594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:08.778200 containerd[1647]: time="2025-07-14T22:48:08.777449506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:08.778200 containerd[1647]: time="2025-07-14T22:48:08.777547293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:08.819792 kubelet[2904]: E0714 22:48:08.819744 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.830896 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.831999 kubelet[2904]: W0714 22:48:08.830917 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.830940 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.831073 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.831999 kubelet[2904]: W0714 22:48:08.831080 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.831090 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.831194 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.831999 kubelet[2904]: W0714 22:48:08.831200 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.831236 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.831999 kubelet[2904]: E0714 22:48:08.831359 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.832332 kubelet[2904]: W0714 22:48:08.831366 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831390 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831530 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.832332 kubelet[2904]: W0714 22:48:08.831538 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831546 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831682 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.832332 kubelet[2904]: W0714 22:48:08.831688 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831694 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.832332 kubelet[2904]: E0714 22:48:08.831910 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.832332 kubelet[2904]: W0714 22:48:08.831916 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.832862 kubelet[2904]: E0714 22:48:08.831924 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.838671 kubelet[2904]: E0714 22:48:08.838612 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.838671 kubelet[2904]: W0714 22:48:08.838629 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.838671 kubelet[2904]: E0714 22:48:08.838649 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.839208 kubelet[2904]: E0714 22:48:08.839025 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.839208 kubelet[2904]: W0714 22:48:08.839034 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.839208 kubelet[2904]: E0714 22:48:08.839045 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.839208 kubelet[2904]: E0714 22:48:08.839148 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.839208 kubelet[2904]: W0714 22:48:08.839155 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.839208 kubelet[2904]: E0714 22:48:08.839164 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.839533 kubelet[2904]: E0714 22:48:08.839431 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.839533 kubelet[2904]: W0714 22:48:08.839438 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.839533 kubelet[2904]: E0714 22:48:08.839446 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.839701 kubelet[2904]: E0714 22:48:08.839627 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.839701 kubelet[2904]: W0714 22:48:08.839634 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.839701 kubelet[2904]: E0714 22:48:08.839646 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.839955 kubelet[2904]: E0714 22:48:08.839857 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.839955 kubelet[2904]: W0714 22:48:08.839865 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.839955 kubelet[2904]: E0714 22:48:08.839873 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.840593 kubelet[2904]: E0714 22:48:08.840479 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.840593 kubelet[2904]: W0714 22:48:08.840488 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.840593 kubelet[2904]: E0714 22:48:08.840495 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.840769 kubelet[2904]: E0714 22:48:08.840702 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.840769 kubelet[2904]: W0714 22:48:08.840712 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.840769 kubelet[2904]: E0714 22:48:08.840719 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.842813 kubelet[2904]: E0714 22:48:08.842694 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.842813 kubelet[2904]: W0714 22:48:08.842707 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.842813 kubelet[2904]: E0714 22:48:08.842719 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.843532 kubelet[2904]: E0714 22:48:08.843415 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.843532 kubelet[2904]: W0714 22:48:08.843424 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.843532 kubelet[2904]: E0714 22:48:08.843439 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.843920 kubelet[2904]: E0714 22:48:08.843852 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.844109 kubelet[2904]: W0714 22:48:08.843966 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.844109 kubelet[2904]: E0714 22:48:08.843984 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.844323 kubelet[2904]: E0714 22:48:08.844243 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.844323 kubelet[2904]: W0714 22:48:08.844252 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.844323 kubelet[2904]: E0714 22:48:08.844263 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.845143 kubelet[2904]: E0714 22:48:08.844707 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.845143 kubelet[2904]: W0714 22:48:08.844720 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.845143 kubelet[2904]: E0714 22:48:08.844730 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.851150 kubelet[2904]: E0714 22:48:08.851133 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.851431 kubelet[2904]: W0714 22:48:08.851252 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.851431 kubelet[2904]: E0714 22:48:08.851269 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.851431 kubelet[2904]: I0714 22:48:08.851290 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/33683c0b-99a6-49cc-aa17-19ada6d1c944-socket-dir\") pod \"csi-node-driver-f78g4\" (UID: \"33683c0b-99a6-49cc-aa17-19ada6d1c944\") " pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:08.851742 kubelet[2904]: E0714 22:48:08.851575 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.851742 kubelet[2904]: W0714 22:48:08.851584 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.851742 kubelet[2904]: E0714 22:48:08.851593 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.851742 kubelet[2904]: I0714 22:48:08.851609 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/33683c0b-99a6-49cc-aa17-19ada6d1c944-varrun\") pod \"csi-node-driver-f78g4\" (UID: \"33683c0b-99a6-49cc-aa17-19ada6d1c944\") " pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:08.851940 kubelet[2904]: E0714 22:48:08.851931 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.854459 kubelet[2904]: W0714 22:48:08.852589 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.854459 kubelet[2904]: E0714 22:48:08.852605 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.854459 kubelet[2904]: I0714 22:48:08.852618 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dbv\" (UniqueName: \"kubernetes.io/projected/33683c0b-99a6-49cc-aa17-19ada6d1c944-kube-api-access-r7dbv\") pod \"csi-node-driver-f78g4\" (UID: \"33683c0b-99a6-49cc-aa17-19ada6d1c944\") " pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:08.854459 kubelet[2904]: E0714 22:48:08.853170 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.854459 kubelet[2904]: W0714 22:48:08.853179 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.854459 kubelet[2904]: E0714 22:48:08.853187 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.854459 kubelet[2904]: I0714 22:48:08.853199 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/33683c0b-99a6-49cc-aa17-19ada6d1c944-registration-dir\") pod \"csi-node-driver-f78g4\" (UID: \"33683c0b-99a6-49cc-aa17-19ada6d1c944\") " pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:08.854459 kubelet[2904]: E0714 22:48:08.853459 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.854989 kubelet[2904]: W0714 22:48:08.853465 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.854989 kubelet[2904]: E0714 22:48:08.853474 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.854989 kubelet[2904]: I0714 22:48:08.853539 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33683c0b-99a6-49cc-aa17-19ada6d1c944-kubelet-dir\") pod \"csi-node-driver-f78g4\" (UID: \"33683c0b-99a6-49cc-aa17-19ada6d1c944\") " pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:08.854989 kubelet[2904]: E0714 22:48:08.853856 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.854989 kubelet[2904]: W0714 22:48:08.853862 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.854989 kubelet[2904]: E0714 22:48:08.853869 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.854989 kubelet[2904]: E0714 22:48:08.854273 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.854989 kubelet[2904]: W0714 22:48:08.854282 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.854989 kubelet[2904]: E0714 22:48:08.854290 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.855110 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.856725 kubelet[2904]: W0714 22:48:08.855118 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.855126 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.855833 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.856725 kubelet[2904]: W0714 22:48:08.855841 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.855848 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.856111 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.856725 kubelet[2904]: W0714 22:48:08.856116 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.856124 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.856725 kubelet[2904]: E0714 22:48:08.856214 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.857113 kubelet[2904]: W0714 22:48:08.856219 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.857113 kubelet[2904]: E0714 22:48:08.856224 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.857113 kubelet[2904]: E0714 22:48:08.856628 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.857113 kubelet[2904]: W0714 22:48:08.856634 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.857113 kubelet[2904]: E0714 22:48:08.856646 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.857113 kubelet[2904]: E0714 22:48:08.856763 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.857113 kubelet[2904]: W0714 22:48:08.856769 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.857113 kubelet[2904]: E0714 22:48:08.856775 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.858031 containerd[1647]: time="2025-07-14T22:48:08.857662609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jk66q,Uid:4fe498bf-795a-4d60-9b97-fda4601215e9,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:08.858074 kubelet[2904]: E0714 22:48:08.857933 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.858074 kubelet[2904]: W0714 22:48:08.857940 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.858074 kubelet[2904]: E0714 22:48:08.857948 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.858074 kubelet[2904]: E0714 22:48:08.858043 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.858074 kubelet[2904]: W0714 22:48:08.858048 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.858074 kubelet[2904]: E0714 22:48:08.858053 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.882261 containerd[1647]: time="2025-07-14T22:48:08.882019025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f5796f546-5n4cb,Uid:bd9c4404-bb25-48ec-be05-1c2aea52076b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bca51d0d63fbe7aa0bdc7b99abfb491d54926dfc69ee4697f06ec8bbe1ee407\"" Jul 14 22:48:08.890645 containerd[1647]: time="2025-07-14T22:48:08.890574862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:08.891460 containerd[1647]: time="2025-07-14T22:48:08.891336856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:08.891460 containerd[1647]: time="2025-07-14T22:48:08.891387215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:08.892100 containerd[1647]: time="2025-07-14T22:48:08.891576275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:08.897932 containerd[1647]: time="2025-07-14T22:48:08.897909849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 22:48:08.936204 containerd[1647]: time="2025-07-14T22:48:08.936182986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jk66q,Uid:4fe498bf-795a-4d60-9b97-fda4601215e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\"" Jul 14 22:48:08.954194 kubelet[2904]: E0714 22:48:08.954135 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.954194 kubelet[2904]: W0714 22:48:08.954146 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.954194 kubelet[2904]: E0714 22:48:08.954158 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.954298 kubelet[2904]: E0714 22:48:08.954290 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.954298 kubelet[2904]: W0714 22:48:08.954295 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.954336 kubelet[2904]: E0714 22:48:08.954302 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954401 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.954900 kubelet[2904]: W0714 22:48:08.954428 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954437 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954532 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.954900 kubelet[2904]: W0714 22:48:08.954536 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954582 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954658 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.954900 kubelet[2904]: W0714 22:48:08.954664 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954671 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.954900 kubelet[2904]: E0714 22:48:08.954773 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961476 kubelet[2904]: W0714 22:48:08.954779 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.954787 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.954873 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961476 kubelet[2904]: W0714 22:48:08.954877 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.954922 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.955537 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961476 kubelet[2904]: W0714 22:48:08.955543 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.955552 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961476 kubelet[2904]: E0714 22:48:08.955660 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961476 kubelet[2904]: W0714 22:48:08.955667 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955759 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961637 kubelet[2904]: W0714 22:48:08.955764 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955778 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955787 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955851 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961637 kubelet[2904]: W0714 22:48:08.955856 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955903 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.955963 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961637 kubelet[2904]: W0714 22:48:08.955975 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961637 kubelet[2904]: E0714 22:48:08.956046 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956095 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961799 kubelet[2904]: W0714 22:48:08.956100 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956124 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956211 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961799 kubelet[2904]: W0714 22:48:08.956216 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956224 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956343 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961799 kubelet[2904]: W0714 22:48:08.956349 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956357 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961799 kubelet[2904]: E0714 22:48:08.956462 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961997 kubelet[2904]: W0714 22:48:08.956468 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956477 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956578 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961997 kubelet[2904]: W0714 22:48:08.956583 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956587 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956794 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961997 kubelet[2904]: W0714 22:48:08.956801 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956811 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.961997 kubelet[2904]: E0714 22:48:08.956923 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.961997 kubelet[2904]: W0714 22:48:08.956928 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.956937 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957035 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.962153 kubelet[2904]: W0714 22:48:08.957040 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957050 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957162 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.962153 kubelet[2904]: W0714 22:48:08.957172 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957179 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957315 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.962153 kubelet[2904]: W0714 22:48:08.957320 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962153 kubelet[2904]: E0714 22:48:08.957327 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.962317 kubelet[2904]: E0714 22:48:08.957446 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.962317 kubelet[2904]: W0714 22:48:08.957452 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962317 kubelet[2904]: E0714 22:48:08.957461 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.962317 kubelet[2904]: E0714 22:48:08.957606 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.962317 kubelet[2904]: W0714 22:48:08.957611 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.962317 kubelet[2904]: E0714 22:48:08.957617 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:08.966557 kubelet[2904]: E0714 22:48:08.966543 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.966676 kubelet[2904]: W0714 22:48:08.966639 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.966676 kubelet[2904]: E0714 22:48:08.966653 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:08.967573 kubelet[2904]: E0714 22:48:08.967487 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:08.967836 kubelet[2904]: W0714 22:48:08.967699 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:08.967836 kubelet[2904]: E0714 22:48:08.967709 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:10.258596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387513444.mount: Deactivated successfully. 
Jul 14 22:48:10.723660 kubelet[2904]: E0714 22:48:10.723385 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:11.064057 containerd[1647]: time="2025-07-14T22:48:11.063920975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:11.064057 containerd[1647]: time="2025-07-14T22:48:11.064034888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 22:48:11.064728 containerd[1647]: time="2025-07-14T22:48:11.064705396Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:11.069858 containerd[1647]: time="2025-07-14T22:48:11.069835307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:11.070628 containerd[1647]: time="2025-07-14T22:48:11.070608776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.172570485s" Jul 14 22:48:11.070689 containerd[1647]: time="2025-07-14T22:48:11.070630104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:48:11.072198 containerd[1647]: time="2025-07-14T22:48:11.072102859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:48:11.102908 containerd[1647]: time="2025-07-14T22:48:11.102838238Z" level=info msg="CreateContainer within sandbox \"7bca51d0d63fbe7aa0bdc7b99abfb491d54926dfc69ee4697f06ec8bbe1ee407\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:48:11.146805 containerd[1647]: time="2025-07-14T22:48:11.146769940Z" level=info msg="CreateContainer within sandbox \"7bca51d0d63fbe7aa0bdc7b99abfb491d54926dfc69ee4697f06ec8bbe1ee407\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e905bd085b16f42adfca517ccb0431c829d7255b438294c129cad76aa1403884\"" Jul 14 22:48:11.168203 containerd[1647]: time="2025-07-14T22:48:11.167915135Z" level=info msg="StartContainer for \"e905bd085b16f42adfca517ccb0431c829d7255b438294c129cad76aa1403884\"" Jul 14 22:48:11.237011 containerd[1647]: time="2025-07-14T22:48:11.236981099Z" level=info msg="StartContainer for \"e905bd085b16f42adfca517ccb0431c829d7255b438294c129cad76aa1403884\" returns successfully" Jul 14 22:48:11.870500 kubelet[2904]: E0714 22:48:11.870457 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.870500 kubelet[2904]: W0714 22:48:11.870491 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870509 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870627 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871669 kubelet[2904]: W0714 22:48:11.870633 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870639 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870737 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871669 kubelet[2904]: W0714 22:48:11.870742 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870747 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870919 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871669 kubelet[2904]: W0714 22:48:11.870925 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871669 kubelet[2904]: E0714 22:48:11.870930 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871027 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871941 kubelet[2904]: W0714 22:48:11.871031 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871036 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871156 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871941 kubelet[2904]: W0714 22:48:11.871161 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871166 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871259 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.871941 kubelet[2904]: W0714 22:48:11.871264 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871269 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.871941 kubelet[2904]: E0714 22:48:11.871386 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872115 kubelet[2904]: W0714 22:48:11.871391 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871396 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871581 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872115 kubelet[2904]: W0714 22:48:11.871586 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871592 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871700 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872115 kubelet[2904]: W0714 22:48:11.871705 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871710 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.872115 kubelet[2904]: E0714 22:48:11.871832 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872115 kubelet[2904]: W0714 22:48:11.871837 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.871842 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.871940 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872306 kubelet[2904]: W0714 22:48:11.871945 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.871950 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.872045 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872306 kubelet[2904]: W0714 22:48:11.872060 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.872065 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.872157 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872306 kubelet[2904]: W0714 22:48:11.872162 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872306 kubelet[2904]: E0714 22:48:11.872167 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.872505 kubelet[2904]: E0714 22:48:11.872260 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.872505 kubelet[2904]: W0714 22:48:11.872265 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.872505 kubelet[2904]: E0714 22:48:11.872270 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.912015 kubelet[2904]: E0714 22:48:11.911992 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.912015 kubelet[2904]: W0714 22:48:11.912009 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.912015 kubelet[2904]: E0714 22:48:11.912022 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912181 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923089 kubelet[2904]: W0714 22:48:11.912187 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912194 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912433 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923089 kubelet[2904]: W0714 22:48:11.912438 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912443 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912683 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923089 kubelet[2904]: W0714 22:48:11.912688 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912694 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.923089 kubelet[2904]: E0714 22:48:11.912843 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923484 kubelet[2904]: W0714 22:48:11.912848 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.912853 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.912954 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923484 kubelet[2904]: W0714 22:48:11.912959 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.912964 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.913107 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923484 kubelet[2904]: W0714 22:48:11.913112 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.913117 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.923484 kubelet[2904]: E0714 22:48:11.913696 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923484 kubelet[2904]: W0714 22:48:11.913702 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.913708 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.913860 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923669 kubelet[2904]: W0714 22:48:11.913865 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.913871 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.914029 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923669 kubelet[2904]: W0714 22:48:11.914035 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.914040 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.914180 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.923669 kubelet[2904]: W0714 22:48:11.914186 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.923669 kubelet[2904]: E0714 22:48:11.914191 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914289 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936657 kubelet[2904]: W0714 22:48:11.914294 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914299 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914535 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936657 kubelet[2904]: W0714 22:48:11.914541 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914546 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914689 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936657 kubelet[2904]: W0714 22:48:11.914695 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914700 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.936657 kubelet[2904]: E0714 22:48:11.914796 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936846 kubelet[2904]: W0714 22:48:11.914802 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.914807 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.914948 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936846 kubelet[2904]: W0714 22:48:11.914956 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.914963 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.915073 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936846 kubelet[2904]: W0714 22:48:11.915078 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.915083 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:11.936846 kubelet[2904]: E0714 22:48:11.915257 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:11.936846 kubelet[2904]: W0714 22:48:11.915262 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:11.937097 kubelet[2904]: E0714 22:48:11.915267 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.077019 kubelet[2904]: I0714 22:48:12.076893 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f5796f546-5n4cb" podStartSLOduration=1.902329038 podStartE2EDuration="4.076849221s" podCreationTimestamp="2025-07-14 22:48:08 +0000 UTC" firstStartedPulling="2025-07-14 22:48:08.897147079 +0000 UTC m=+17.288439364" lastFinishedPulling="2025-07-14 22:48:11.07166726 +0000 UTC m=+19.462959547" observedRunningTime="2025-07-14 22:48:12.076471578 +0000 UTC m=+20.467763876" watchObservedRunningTime="2025-07-14 22:48:12.076849221 +0000 UTC m=+20.468141514" Jul 14 22:48:12.315101 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:12.315139 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:12.320158 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:12.690770 kubelet[2904]: E0714 22:48:12.690701 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:12.717537 containerd[1647]: time="2025-07-14T22:48:12.717420173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:12.720496 containerd[1647]: time="2025-07-14T22:48:12.720427833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 14 22:48:12.722917 containerd[1647]: time="2025-07-14T22:48:12.722782339Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:12.728017 
containerd[1647]: time="2025-07-14T22:48:12.727982046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:12.728901 containerd[1647]: time="2025-07-14T22:48:12.728867008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.65673992s" Jul 14 22:48:12.728934 containerd[1647]: time="2025-07-14T22:48:12.728908362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 22:48:12.731007 containerd[1647]: time="2025-07-14T22:48:12.730966703Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:48:12.861933 kubelet[2904]: I0714 22:48:12.861647 2904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:48:12.901699 kubelet[2904]: E0714 22:48:12.901674 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.901699 kubelet[2904]: W0714 22:48:12.901693 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.901709 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.901882 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916691 kubelet[2904]: W0714 22:48:12.901905 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.901913 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.902034 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916691 kubelet[2904]: W0714 22:48:12.902040 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.902058 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.902177 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916691 kubelet[2904]: W0714 22:48:12.902182 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916691 kubelet[2904]: E0714 22:48:12.902188 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902311 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916862 kubelet[2904]: W0714 22:48:12.902324 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902333 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902434 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916862 kubelet[2904]: W0714 22:48:12.902440 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902445 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902554 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.916862 kubelet[2904]: W0714 22:48:12.902559 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902564 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.916862 kubelet[2904]: E0714 22:48:12.902684 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917043 kubelet[2904]: W0714 22:48:12.902689 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.902700 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.902811 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917043 kubelet[2904]: W0714 22:48:12.902818 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.902829 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.902968 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917043 kubelet[2904]: W0714 22:48:12.902974 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.902980 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.917043 kubelet[2904]: E0714 22:48:12.903094 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917043 kubelet[2904]: W0714 22:48:12.903100 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903106 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903216 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917204 kubelet[2904]: W0714 22:48:12.903221 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903227 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903342 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917204 kubelet[2904]: W0714 22:48:12.903348 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903359 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903461 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917204 kubelet[2904]: W0714 22:48:12.903467 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917204 kubelet[2904]: E0714 22:48:12.903478 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:48:12.917369 kubelet[2904]: E0714 22:48:12.903586 2904 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:48:12.917369 kubelet[2904]: W0714 22:48:12.903592 2904 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:48:12.917369 kubelet[2904]: E0714 22:48:12.903603 2904 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:48:12.926288 containerd[1647]: time="2025-07-14T22:48:12.921461475Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea\"" Jul 14 22:48:12.926288 containerd[1647]: time="2025-07-14T22:48:12.922669409Z" level=info msg="StartContainer for \"3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea\"" Jul 14 22:48:13.016865 containerd[1647]: time="2025-07-14T22:48:13.016783489Z" level=info msg="StartContainer for \"3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea\" returns successfully" Jul 14 22:48:13.033051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea-rootfs.mount: Deactivated successfully.
Jul 14 22:48:13.194313 containerd[1647]: time="2025-07-14T22:48:13.165982419Z" level=info msg="shim disconnected" id=3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea namespace=k8s.io Jul 14 22:48:13.194313 containerd[1647]: time="2025-07-14T22:48:13.194191993Z" level=warning msg="cleaning up after shim disconnected" id=3105569ff65a473f8910d673040645ffca79d74bf84dd6b7a5bf0b2bd7dc20ea namespace=k8s.io Jul 14 22:48:13.194313 containerd[1647]: time="2025-07-14T22:48:13.194204280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:48:13.865751 containerd[1647]: time="2025-07-14T22:48:13.865710695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:48:14.363101 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:14.363899 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:14.363120 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:14.691033 kubelet[2904]: E0714 22:48:14.690945 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:16.659510 containerd[1647]: time="2025-07-14T22:48:16.659481250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:16.660363 containerd[1647]: time="2025-07-14T22:48:16.660336171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 22:48:16.660404 containerd[1647]: time="2025-07-14T22:48:16.660386975Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 
22:48:16.672913 containerd[1647]: time="2025-07-14T22:48:16.672742015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:16.673544 containerd[1647]: time="2025-07-14T22:48:16.673459950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.807726268s" Jul 14 22:48:16.673544 containerd[1647]: time="2025-07-14T22:48:16.673485883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 22:48:16.675960 containerd[1647]: time="2025-07-14T22:48:16.675854777Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:48:16.684639 containerd[1647]: time="2025-07-14T22:48:16.684569216Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a\"" Jul 14 22:48:16.687220 containerd[1647]: time="2025-07-14T22:48:16.686407988Z" level=info msg="StartContainer for \"87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a\"" Jul 14 22:48:16.690738 kubelet[2904]: E0714 22:48:16.690709 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:16.735942 containerd[1647]: time="2025-07-14T22:48:16.735499287Z" level=info msg="StartContainer for \"87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a\" returns successfully" Jul 14 22:48:18.331047 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:18.331070 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:18.333457 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:18.650608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a-rootfs.mount: Deactivated successfully. Jul 14 22:48:18.661302 kubelet[2904]: I0714 22:48:18.656339 2904 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:48:18.664721 containerd[1647]: time="2025-07-14T22:48:18.661574754Z" level=info msg="shim disconnected" id=87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a namespace=k8s.io Jul 14 22:48:18.664721 containerd[1647]: time="2025-07-14T22:48:18.661676159Z" level=warning msg="cleaning up after shim disconnected" id=87b62075a7314d944b963f96d818af08491f824da37ec7e1453debe305d1033a namespace=k8s.io Jul 14 22:48:18.664721 containerd[1647]: time="2025-07-14T22:48:18.661683760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:48:18.762632 kubelet[2904]: I0714 22:48:18.761529 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5c12ff-f464-401e-803f-958a1cf87455-tigera-ca-bundle\") pod \"calico-kube-controllers-58bddbbd7-xf2v7\" (UID: \"6e5c12ff-f464-401e-803f-958a1cf87455\") " pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" Jul 14 22:48:18.762632 kubelet[2904]: I0714 22:48:18.761564 2904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkr8c\" (UniqueName: \"kubernetes.io/projected/6e5c12ff-f464-401e-803f-958a1cf87455-kube-api-access-nkr8c\") pod \"calico-kube-controllers-58bddbbd7-xf2v7\" (UID: \"6e5c12ff-f464-401e-803f-958a1cf87455\") " pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" Jul 14 22:48:18.769691 containerd[1647]: time="2025-07-14T22:48:18.769666176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78g4,Uid:33683c0b-99a6-49cc-aa17-19ada6d1c944,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:18.861747 kubelet[2904]: I0714 22:48:18.861720 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ffbe111-cda0-4331-a242-4ff57bc4f605-config-volume\") pod \"coredns-7c65d6cfc9-gtzwg\" (UID: \"7ffbe111-cda0-4331-a242-4ff57bc4f605\") " pod="kube-system/coredns-7c65d6cfc9-gtzwg" Jul 14 22:48:18.861747 kubelet[2904]: I0714 22:48:18.861748 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e45da145-d058-4e98-b935-90ad928df47e-calico-apiserver-certs\") pod \"calico-apiserver-77d5bbcd9f-rxk2m\" (UID: \"e45da145-d058-4e98-b935-90ad928df47e\") " pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" Jul 14 22:48:18.861944 kubelet[2904]: I0714 22:48:18.861761 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9twf\" (UniqueName: \"kubernetes.io/projected/7ffbe111-cda0-4331-a242-4ff57bc4f605-kube-api-access-t9twf\") pod \"coredns-7c65d6cfc9-gtzwg\" (UID: \"7ffbe111-cda0-4331-a242-4ff57bc4f605\") " pod="kube-system/coredns-7c65d6cfc9-gtzwg" Jul 14 22:48:18.861944 kubelet[2904]: I0714 22:48:18.861771 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a21cb-112c-4f93-a129-af707bcdd455-whisker-ca-bundle\") pod \"whisker-6c897db977-sdsfz\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " pod="calico-system/whisker-6c897db977-sdsfz" Jul 14 22:48:18.861944 kubelet[2904]: I0714 22:48:18.861781 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br8fs\" (UniqueName: \"kubernetes.io/projected/026a21cb-112c-4f93-a129-af707bcdd455-kube-api-access-br8fs\") pod \"whisker-6c897db977-sdsfz\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " pod="calico-system/whisker-6c897db977-sdsfz" Jul 14 22:48:18.861944 kubelet[2904]: I0714 22:48:18.861794 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e7e152-33a4-475e-81b6-5e99058e6154-config\") pod \"goldmane-58fd7646b9-c6d2z\" (UID: \"b5e7e152-33a4-475e-81b6-5e99058e6154\") " pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:18.861944 kubelet[2904]: I0714 22:48:18.861803 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5e7e152-33a4-475e-81b6-5e99058e6154-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-c6d2z\" (UID: \"b5e7e152-33a4-475e-81b6-5e99058e6154\") " pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:18.863203 kubelet[2904]: I0714 22:48:18.861813 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b5e7e152-33a4-475e-81b6-5e99058e6154-goldmane-key-pair\") pod \"goldmane-58fd7646b9-c6d2z\" (UID: \"b5e7e152-33a4-475e-81b6-5e99058e6154\") " pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:18.863203 kubelet[2904]: I0714 22:48:18.861822 2904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9s94\" (UniqueName: \"kubernetes.io/projected/e45da145-d058-4e98-b935-90ad928df47e-kube-api-access-s9s94\") pod \"calico-apiserver-77d5bbcd9f-rxk2m\" (UID: \"e45da145-d058-4e98-b935-90ad928df47e\") " pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" Jul 14 22:48:18.863203 kubelet[2904]: I0714 22:48:18.861834 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2gg\" (UniqueName: \"kubernetes.io/projected/55a1ec98-b34e-4be5-aa77-830413ef608b-kube-api-access-8n2gg\") pod \"coredns-7c65d6cfc9-qt7fd\" (UID: \"55a1ec98-b34e-4be5-aa77-830413ef608b\") " pod="kube-system/coredns-7c65d6cfc9-qt7fd" Jul 14 22:48:18.863203 kubelet[2904]: I0714 22:48:18.861849 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/026a21cb-112c-4f93-a129-af707bcdd455-whisker-backend-key-pair\") pod \"whisker-6c897db977-sdsfz\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " pod="calico-system/whisker-6c897db977-sdsfz" Jul 14 22:48:18.863203 kubelet[2904]: I0714 22:48:18.861858 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7657\" (UniqueName: \"kubernetes.io/projected/b5e7e152-33a4-475e-81b6-5e99058e6154-kube-api-access-d7657\") pod \"goldmane-58fd7646b9-c6d2z\" (UID: \"b5e7e152-33a4-475e-81b6-5e99058e6154\") " pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:18.863305 kubelet[2904]: I0714 22:48:18.861906 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjtkn\" (UniqueName: \"kubernetes.io/projected/5ab63976-6c51-436b-83f8-138c3003c36b-kube-api-access-tjtkn\") pod \"calico-apiserver-77d5bbcd9f-bfs2w\" (UID: \"5ab63976-6c51-436b-83f8-138c3003c36b\") " 
pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" Jul 14 22:48:18.863305 kubelet[2904]: I0714 22:48:18.861918 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ab63976-6c51-436b-83f8-138c3003c36b-calico-apiserver-certs\") pod \"calico-apiserver-77d5bbcd9f-bfs2w\" (UID: \"5ab63976-6c51-436b-83f8-138c3003c36b\") " pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" Jul 14 22:48:18.863305 kubelet[2904]: I0714 22:48:18.861937 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55a1ec98-b34e-4be5-aa77-830413ef608b-config-volume\") pod \"coredns-7c65d6cfc9-qt7fd\" (UID: \"55a1ec98-b34e-4be5-aa77-830413ef608b\") " pod="kube-system/coredns-7c65d6cfc9-qt7fd" Jul 14 22:48:18.876640 containerd[1647]: time="2025-07-14T22:48:18.876611500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:48:19.040669 containerd[1647]: time="2025-07-14T22:48:19.040603113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bddbbd7-xf2v7,Uid:6e5c12ff-f464-401e-803f-958a1cf87455,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:19.059038 containerd[1647]: time="2025-07-14T22:48:19.058997494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gtzwg,Uid:7ffbe111-cda0-4331-a242-4ff57bc4f605,Namespace:kube-system,Attempt:0,}" Jul 14 22:48:19.060767 containerd[1647]: time="2025-07-14T22:48:19.060675383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-bfs2w,Uid:5ab63976-6c51-436b-83f8-138c3003c36b,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:48:19.063028 containerd[1647]: time="2025-07-14T22:48:19.063009258Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt7fd,Uid:55a1ec98-b34e-4be5-aa77-830413ef608b,Namespace:kube-system,Attempt:0,}" Jul 14 22:48:19.064730 containerd[1647]: time="2025-07-14T22:48:19.064685962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-rxk2m,Uid:e45da145-d058-4e98-b935-90ad928df47e,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:48:19.070936 containerd[1647]: time="2025-07-14T22:48:19.070913396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c6d2z,Uid:b5e7e152-33a4-475e-81b6-5e99058e6154,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:19.071170 containerd[1647]: time="2025-07-14T22:48:19.071158760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c897db977-sdsfz,Uid:026a21cb-112c-4f93-a129-af707bcdd455,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:19.086866 containerd[1647]: time="2025-07-14T22:48:19.086072869Z" level=error msg="Failed to destroy network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.093597 containerd[1647]: time="2025-07-14T22:48:19.093568682Z" level=error msg="encountered an error cleaning up failed sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.106365 containerd[1647]: time="2025-07-14T22:48:19.106319355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78g4,Uid:33683c0b-99a6-49cc-aa17-19ada6d1c944,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.115396 kubelet[2904]: E0714 22:48:19.114617 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.115396 kubelet[2904]: E0714 22:48:19.114676 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:19.115396 kubelet[2904]: E0714 22:48:19.114695 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f78g4" Jul 14 22:48:19.115533 kubelet[2904]: E0714 22:48:19.114725 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f78g4_calico-system(33683c0b-99a6-49cc-aa17-19ada6d1c944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-f78g4_calico-system(33683c0b-99a6-49cc-aa17-19ada6d1c944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:19.170477 containerd[1647]: time="2025-07-14T22:48:19.170437631Z" level=error msg="Failed to destroy network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.173779 containerd[1647]: time="2025-07-14T22:48:19.173752865Z" level=error msg="encountered an error cleaning up failed sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.173936 containerd[1647]: time="2025-07-14T22:48:19.173922781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-bfs2w,Uid:5ab63976-6c51-436b-83f8-138c3003c36b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.174281 kubelet[2904]: E0714 22:48:19.174255 2904 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.174329 kubelet[2904]: E0714 22:48:19.174298 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" Jul 14 22:48:19.174329 kubelet[2904]: E0714 22:48:19.174314 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" Jul 14 22:48:19.174368 kubelet[2904]: E0714 22:48:19.174342 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d5bbcd9f-bfs2w_calico-apiserver(5ab63976-6c51-436b-83f8-138c3003c36b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77d5bbcd9f-bfs2w_calico-apiserver(5ab63976-6c51-436b-83f8-138c3003c36b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" podUID="5ab63976-6c51-436b-83f8-138c3003c36b" Jul 14 22:48:19.178140 containerd[1647]: time="2025-07-14T22:48:19.178111016Z" level=error msg="Failed to destroy network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.180821 containerd[1647]: time="2025-07-14T22:48:19.180782317Z" level=error msg="encountered an error cleaning up failed sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.180931 containerd[1647]: time="2025-07-14T22:48:19.180833600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bddbbd7-xf2v7,Uid:6e5c12ff-f464-401e-803f-958a1cf87455,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.183003 kubelet[2904]: E0714 22:48:19.182978 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 14 22:48:19.183090 kubelet[2904]: E0714 22:48:19.183013 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" Jul 14 22:48:19.183090 kubelet[2904]: E0714 22:48:19.183027 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" Jul 14 22:48:19.183090 kubelet[2904]: E0714 22:48:19.183049 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58bddbbd7-xf2v7_calico-system(6e5c12ff-f464-401e-803f-958a1cf87455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58bddbbd7-xf2v7_calico-system(6e5c12ff-f464-401e-803f-958a1cf87455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" podUID="6e5c12ff-f464-401e-803f-958a1cf87455" Jul 14 22:48:19.212576 containerd[1647]: time="2025-07-14T22:48:19.212548812Z" level=error msg="Failed to 
destroy network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.213048 containerd[1647]: time="2025-07-14T22:48:19.212990815Z" level=error msg="encountered an error cleaning up failed sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.213048 containerd[1647]: time="2025-07-14T22:48:19.213020415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt7fd,Uid:55a1ec98-b34e-4be5-aa77-830413ef608b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.213497 kubelet[2904]: E0714 22:48:19.213401 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.213497 kubelet[2904]: E0714 22:48:19.213454 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qt7fd" Jul 14 22:48:19.213497 kubelet[2904]: E0714 22:48:19.213476 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qt7fd" Jul 14 22:48:19.214552 kubelet[2904]: E0714 22:48:19.214071 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qt7fd_kube-system(55a1ec98-b34e-4be5-aa77-830413ef608b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qt7fd_kube-system(55a1ec98-b34e-4be5-aa77-830413ef608b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qt7fd" podUID="55a1ec98-b34e-4be5-aa77-830413ef608b" Jul 14 22:48:19.215354 containerd[1647]: time="2025-07-14T22:48:19.215232142Z" level=error msg="Failed to destroy network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.215914 containerd[1647]: time="2025-07-14T22:48:19.215863753Z" level=error msg="Failed to 
destroy network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216135 containerd[1647]: time="2025-07-14T22:48:19.216054696Z" level=error msg="encountered an error cleaning up failed sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216135 containerd[1647]: time="2025-07-14T22:48:19.216083043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-rxk2m,Uid:e45da145-d058-4e98-b935-90ad928df47e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216135 containerd[1647]: time="2025-07-14T22:48:19.216109648Z" level=error msg="encountered an error cleaning up failed sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216236 kubelet[2904]: E0714 22:48:19.216180 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216236 kubelet[2904]: E0714 22:48:19.216211 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" Jul 14 22:48:19.216236 kubelet[2904]: E0714 22:48:19.216223 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" Jul 14 22:48:19.216309 kubelet[2904]: E0714 22:48:19.216257 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d5bbcd9f-rxk2m_calico-apiserver(e45da145-d058-4e98-b935-90ad928df47e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77d5bbcd9f-rxk2m_calico-apiserver(e45da145-d058-4e98-b935-90ad928df47e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" 
podUID="e45da145-d058-4e98-b935-90ad928df47e" Jul 14 22:48:19.216656 containerd[1647]: time="2025-07-14T22:48:19.216467148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gtzwg,Uid:7ffbe111-cda0-4331-a242-4ff57bc4f605,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216861 kubelet[2904]: E0714 22:48:19.216799 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.216861 kubelet[2904]: E0714 22:48:19.216820 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gtzwg" Jul 14 22:48:19.216861 kubelet[2904]: E0714 22:48:19.216830 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-gtzwg" Jul 14 22:48:19.217347 kubelet[2904]: E0714 22:48:19.217260 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gtzwg_kube-system(7ffbe111-cda0-4331-a242-4ff57bc4f605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gtzwg_kube-system(7ffbe111-cda0-4331-a242-4ff57bc4f605)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gtzwg" podUID="7ffbe111-cda0-4331-a242-4ff57bc4f605" Jul 14 22:48:19.228975 containerd[1647]: time="2025-07-14T22:48:19.228812110Z" level=error msg="Failed to destroy network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.229056 containerd[1647]: time="2025-07-14T22:48:19.229033151Z" level=error msg="encountered an error cleaning up failed sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.229078 containerd[1647]: time="2025-07-14T22:48:19.229058826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c897db977-sdsfz,Uid:026a21cb-112c-4f93-a129-af707bcdd455,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.229276 kubelet[2904]: E0714 22:48:19.229193 2904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.229276 kubelet[2904]: E0714 22:48:19.229229 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c897db977-sdsfz" Jul 14 22:48:19.229276 kubelet[2904]: E0714 22:48:19.229245 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c897db977-sdsfz" Jul 14 22:48:19.229352 kubelet[2904]: E0714 22:48:19.229268 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c897db977-sdsfz_calico-system(026a21cb-112c-4f93-a129-af707bcdd455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6c897db977-sdsfz_calico-system(026a21cb-112c-4f93-a129-af707bcdd455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c897db977-sdsfz" podUID="026a21cb-112c-4f93-a129-af707bcdd455" Jul 14 22:48:19.234977 containerd[1647]: time="2025-07-14T22:48:19.234947968Z" level=error msg="Failed to destroy network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.235178 containerd[1647]: time="2025-07-14T22:48:19.235159996Z" level=error msg="encountered an error cleaning up failed sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.235216 containerd[1647]: time="2025-07-14T22:48:19.235192008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c6d2z,Uid:b5e7e152-33a4-475e-81b6-5e99058e6154,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.235359 kubelet[2904]: E0714 22:48:19.235336 2904 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:19.235393 kubelet[2904]: E0714 22:48:19.235379 2904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:19.235413 kubelet[2904]: E0714 22:48:19.235393 2904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-c6d2z" Jul 14 22:48:19.235443 kubelet[2904]: E0714 22:48:19.235426 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-c6d2z_calico-system(b5e7e152-33a4-475e-81b6-5e99058e6154)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-c6d2z_calico-system(b5e7e152-33a4-475e-81b6-5e99058e6154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-c6d2z" podUID="b5e7e152-33a4-475e-81b6-5e99058e6154" Jul 14 22:48:19.657372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c-shm.mount: Deactivated successfully. Jul 14 22:48:19.875599 kubelet[2904]: I0714 22:48:19.875580 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:19.876405 kubelet[2904]: I0714 22:48:19.876389 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:19.914459 kubelet[2904]: I0714 22:48:19.914368 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:19.915913 kubelet[2904]: I0714 22:48:19.915403 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:19.916030 kubelet[2904]: I0714 22:48:19.916011 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:19.917223 kubelet[2904]: I0714 22:48:19.916631 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:19.917530 kubelet[2904]: I0714 22:48:19.917516 2904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:19.918979 kubelet[2904]: I0714 22:48:19.918906 2904 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:19.935484 containerd[1647]: time="2025-07-14T22:48:19.934865077Z" level=info msg="StopPodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" Jul 14 22:48:19.935484 containerd[1647]: time="2025-07-14T22:48:19.935385434Z" level=info msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" Jul 14 22:48:19.935948 containerd[1647]: time="2025-07-14T22:48:19.935929997Z" level=info msg="Ensure that sandbox bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297 in task-service has been cleanup successfully" Jul 14 22:48:19.936208 containerd[1647]: time="2025-07-14T22:48:19.936193548Z" level=info msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" Jul 14 22:48:19.936672 containerd[1647]: time="2025-07-14T22:48:19.936661012Z" level=info msg="Ensure that sandbox 0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c in task-service has been cleanup successfully" Jul 14 22:48:19.940939 containerd[1647]: time="2025-07-14T22:48:19.940437012Z" level=info msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" Jul 14 22:48:19.940939 containerd[1647]: time="2025-07-14T22:48:19.940651582Z" level=info msg="Ensure that sandbox 2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e in task-service has been cleanup successfully" Jul 14 22:48:19.941872 containerd[1647]: time="2025-07-14T22:48:19.941854492Z" level=info msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" Jul 14 22:48:19.942673 containerd[1647]: time="2025-07-14T22:48:19.942661814Z" level=info msg="Ensure that sandbox 35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391 in task-service has been cleanup successfully" Jul 14 22:48:19.943680 containerd[1647]: time="2025-07-14T22:48:19.942822143Z" level=info 
msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" Jul 14 22:48:19.945487 containerd[1647]: time="2025-07-14T22:48:19.945473706Z" level=info msg="Ensure that sandbox 25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90 in task-service has been cleanup successfully" Jul 14 22:48:19.945806 containerd[1647]: time="2025-07-14T22:48:19.935931331Z" level=info msg="Ensure that sandbox d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786 in task-service has been cleanup successfully" Jul 14 22:48:19.946770 containerd[1647]: time="2025-07-14T22:48:19.942921214Z" level=info msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" Jul 14 22:48:19.947192 containerd[1647]: time="2025-07-14T22:48:19.947176844Z" level=info msg="Ensure that sandbox 181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09 in task-service has been cleanup successfully" Jul 14 22:48:19.948263 containerd[1647]: time="2025-07-14T22:48:19.942981736Z" level=info msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" Jul 14 22:48:19.948554 containerd[1647]: time="2025-07-14T22:48:19.948539669Z" level=info msg="Ensure that sandbox 46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4 in task-service has been cleanup successfully" Jul 14 22:48:20.005009 containerd[1647]: time="2025-07-14T22:48:20.004975388Z" level=error msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" failed" error="failed to destroy network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.007301 containerd[1647]: time="2025-07-14T22:48:20.007265196Z" level=error msg="StopPodSandbox for 
\"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" failed" error="failed to destroy network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.010367 kubelet[2904]: E0714 22:48:20.005170 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:20.010492 kubelet[2904]: E0714 22:48:20.007402 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:20.018432 containerd[1647]: time="2025-07-14T22:48:20.018403561Z" level=error msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" failed" error="failed to destroy network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.019829 containerd[1647]: time="2025-07-14T22:48:20.019802212Z" 
level=error msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" failed" error="failed to destroy network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.019976 containerd[1647]: time="2025-07-14T22:48:20.019962007Z" level=error msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" failed" error="failed to destroy network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.026266 kubelet[2904]: E0714 22:48:20.016067 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391"} Jul 14 22:48:20.026359 kubelet[2904]: E0714 22:48:20.026282 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ab63976-6c51-436b-83f8-138c3003c36b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.026359 kubelet[2904]: E0714 22:48:20.026301 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ab63976-6c51-436b-83f8-138c3003c36b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" podUID="5ab63976-6c51-436b-83f8-138c3003c36b" Jul 14 22:48:20.026449 kubelet[2904]: E0714 22:48:20.026387 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:20.026449 kubelet[2904]: E0714 22:48:20.026408 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c"} Jul 14 22:48:20.026449 kubelet[2904]: E0714 22:48:20.026423 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33683c0b-99a6-49cc-aa17-19ada6d1c944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.026449 kubelet[2904]: E0714 22:48:20.026433 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33683c0b-99a6-49cc-aa17-19ada6d1c944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f78g4" podUID="33683c0b-99a6-49cc-aa17-19ada6d1c944" Jul 14 22:48:20.026552 kubelet[2904]: E0714 22:48:20.026447 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:20.026552 kubelet[2904]: E0714 22:48:20.026456 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09"} Jul 14 22:48:20.026552 kubelet[2904]: E0714 22:48:20.026468 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"026a21cb-112c-4f93-a129-af707bcdd455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.026552 kubelet[2904]: E0714 22:48:20.026477 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"026a21cb-112c-4f93-a129-af707bcdd455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c897db977-sdsfz" podUID="026a21cb-112c-4f93-a129-af707bcdd455" Jul 14 22:48:20.026658 kubelet[2904]: E0714 22:48:20.026514 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:20.026658 kubelet[2904]: E0714 22:48:20.026527 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e"} Jul 14 22:48:20.026658 kubelet[2904]: E0714 22:48:20.026545 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e5c12ff-f464-401e-803f-958a1cf87455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.026658 kubelet[2904]: E0714 22:48:20.026559 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e5c12ff-f464-401e-803f-958a1cf87455\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" podUID="6e5c12ff-f464-401e-803f-958a1cf87455" Jul 14 22:48:20.026922 kubelet[2904]: E0714 22:48:20.016121 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297"} Jul 14 22:48:20.026922 kubelet[2904]: E0714 22:48:20.026800 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ffbe111-cda0-4331-a242-4ff57bc4f605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.026922 kubelet[2904]: E0714 22:48:20.026814 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ffbe111-cda0-4331-a242-4ff57bc4f605\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gtzwg" podUID="7ffbe111-cda0-4331-a242-4ff57bc4f605" Jul 14 22:48:20.031935 containerd[1647]: time="2025-07-14T22:48:20.031844185Z" level=error msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" failed" 
error="failed to destroy network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.032285 kubelet[2904]: E0714 22:48:20.032162 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:20.032285 kubelet[2904]: E0714 22:48:20.032209 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786"} Jul 14 22:48:20.032285 kubelet[2904]: E0714 22:48:20.032239 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e45da145-d058-4e98-b935-90ad928df47e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.032285 kubelet[2904]: E0714 22:48:20.032254 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e45da145-d058-4e98-b935-90ad928df47e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" podUID="e45da145-d058-4e98-b935-90ad928df47e" Jul 14 22:48:20.032912 containerd[1647]: time="2025-07-14T22:48:20.032525853Z" level=error msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" failed" error="failed to destroy network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.034252 kubelet[2904]: E0714 22:48:20.032831 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:20.034252 kubelet[2904]: E0714 22:48:20.032850 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90"} Jul 14 22:48:20.034252 kubelet[2904]: E0714 22:48:20.032867 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5e7e152-33a4-475e-81b6-5e99058e6154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.034252 kubelet[2904]: E0714 22:48:20.032880 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5e7e152-33a4-475e-81b6-5e99058e6154\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-c6d2z" podUID="b5e7e152-33a4-475e-81b6-5e99058e6154" Jul 14 22:48:20.034560 containerd[1647]: time="2025-07-14T22:48:20.032984578Z" level=error msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" failed" error="failed to destroy network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:48:20.034643 kubelet[2904]: E0714 22:48:20.033088 2904 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:20.034643 kubelet[2904]: E0714 22:48:20.033105 2904 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4"} Jul 14 
22:48:20.034643 kubelet[2904]: E0714 22:48:20.033118 2904 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55a1ec98-b34e-4be5-aa77-830413ef608b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:48:20.034643 kubelet[2904]: E0714 22:48:20.033490 2904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55a1ec98-b34e-4be5-aa77-830413ef608b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qt7fd" podUID="55a1ec98-b34e-4be5-aa77-830413ef608b" Jul 14 22:48:20.379137 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:20.380020 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:20.379157 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:22.427214 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:22.427926 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:22.427219 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:23.998545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346263800.mount: Deactivated successfully. 
Jul 14 22:48:24.108646 containerd[1647]: time="2025-07-14T22:48:24.108595410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:24.110387 containerd[1647]: time="2025-07-14T22:48:24.110262129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:48:24.131247 containerd[1647]: time="2025-07-14T22:48:24.131201456Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:24.132580 containerd[1647]: time="2025-07-14T22:48:24.132558439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:24.135572 containerd[1647]: time="2025-07-14T22:48:24.135546576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.256318327s" Jul 14 22:48:24.135623 containerd[1647]: time="2025-07-14T22:48:24.135573453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:48:24.166216 containerd[1647]: time="2025-07-14T22:48:24.165713397Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:48:24.199592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416479212.mount: 
Deactivated successfully. Jul 14 22:48:24.203950 containerd[1647]: time="2025-07-14T22:48:24.203926192Z" level=info msg="CreateContainer within sandbox \"fed184ab0b1eb10d20e84a5422e807cd6de8f94b0769075410fc3212cd82052c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59a5476f4f89edfb225680ce375bf0b08525d224d3bdc4709c20af2e929f9896\"" Jul 14 22:48:24.206048 containerd[1647]: time="2025-07-14T22:48:24.205351610Z" level=info msg="StartContainer for \"59a5476f4f89edfb225680ce375bf0b08525d224d3bdc4709c20af2e929f9896\"" Jul 14 22:48:24.280563 containerd[1647]: time="2025-07-14T22:48:24.280496058Z" level=info msg="StartContainer for \"59a5476f4f89edfb225680ce375bf0b08525d224d3bdc4709c20af2e929f9896\" returns successfully" Jul 14 22:48:24.475097 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:24.475938 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:24.475102 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:24.481895 kubelet[2904]: I0714 22:48:24.419808 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jk66q" podStartSLOduration=1.138393107 podStartE2EDuration="16.343292075s" podCreationTimestamp="2025-07-14 22:48:08 +0000 UTC" firstStartedPulling="2025-07-14 22:48:08.936855319 +0000 UTC m=+17.328147605" lastFinishedPulling="2025-07-14 22:48:24.141754288 +0000 UTC m=+32.533046573" observedRunningTime="2025-07-14 22:48:24.343046249 +0000 UTC m=+32.734338544" watchObservedRunningTime="2025-07-14 22:48:24.343292075 +0000 UTC m=+32.734584365" Jul 14 22:48:24.723474 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:48:24.725003 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jul 14 22:48:25.014284 containerd[1647]: time="2025-07-14T22:48:25.014033904Z" level=info msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.085 [INFO][4156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.087 [INFO][4156] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" iface="eth0" netns="/var/run/netns/cni-ef293e36-b752-f5eb-50b6-7f07da1fe229" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.088 [INFO][4156] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" iface="eth0" netns="/var/run/netns/cni-ef293e36-b752-f5eb-50b6-7f07da1fe229" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.088 [INFO][4156] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" iface="eth0" netns="/var/run/netns/cni-ef293e36-b752-f5eb-50b6-7f07da1fe229" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.088 [INFO][4156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.088 [INFO][4156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.492 [INFO][4163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.495 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.495 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.509 [WARNING][4163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.509 [INFO][4163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.510 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:25.512908 containerd[1647]: 2025-07-14 22:48:25.511 [INFO][4156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:25.516182 systemd[1]: run-netns-cni\x2def293e36\x2db752\x2df5eb\x2d50b6\x2d7f07da1fe229.mount: Deactivated successfully. 
Jul 14 22:48:25.518567 containerd[1647]: time="2025-07-14T22:48:25.517923384Z" level=info msg="TearDown network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" successfully" Jul 14 22:48:25.518567 containerd[1647]: time="2025-07-14T22:48:25.517947930Z" level=info msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" returns successfully" Jul 14 22:48:25.649540 kubelet[2904]: I0714 22:48:25.649508 2904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/026a21cb-112c-4f93-a129-af707bcdd455-whisker-backend-key-pair\") pod \"026a21cb-112c-4f93-a129-af707bcdd455\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " Jul 14 22:48:25.649867 kubelet[2904]: I0714 22:48:25.649561 2904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br8fs\" (UniqueName: \"kubernetes.io/projected/026a21cb-112c-4f93-a129-af707bcdd455-kube-api-access-br8fs\") pod \"026a21cb-112c-4f93-a129-af707bcdd455\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " Jul 14 22:48:25.649867 kubelet[2904]: I0714 22:48:25.649576 2904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a21cb-112c-4f93-a129-af707bcdd455-whisker-ca-bundle\") pod \"026a21cb-112c-4f93-a129-af707bcdd455\" (UID: \"026a21cb-112c-4f93-a129-af707bcdd455\") " Jul 14 22:48:25.665518 systemd[1]: var-lib-kubelet-pods-026a21cb\x2d112c\x2d4f93\x2da129\x2daf707bcdd455-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr8fs.mount: Deactivated successfully. Jul 14 22:48:25.665606 systemd[1]: var-lib-kubelet-pods-026a21cb\x2d112c\x2d4f93\x2da129\x2daf707bcdd455-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 14 22:48:25.668169 kubelet[2904]: I0714 22:48:25.667092 2904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026a21cb-112c-4f93-a129-af707bcdd455-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "026a21cb-112c-4f93-a129-af707bcdd455" (UID: "026a21cb-112c-4f93-a129-af707bcdd455"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:48:25.668169 kubelet[2904]: I0714 22:48:25.668094 2904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026a21cb-112c-4f93-a129-af707bcdd455-kube-api-access-br8fs" (OuterVolumeSpecName: "kube-api-access-br8fs") pod "026a21cb-112c-4f93-a129-af707bcdd455" (UID: "026a21cb-112c-4f93-a129-af707bcdd455"). InnerVolumeSpecName "kube-api-access-br8fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:48:25.668555 kubelet[2904]: I0714 22:48:25.667263 2904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026a21cb-112c-4f93-a129-af707bcdd455-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "026a21cb-112c-4f93-a129-af707bcdd455" (UID: "026a21cb-112c-4f93-a129-af707bcdd455"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:48:25.749979 kubelet[2904]: I0714 22:48:25.749948 2904 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/026a21cb-112c-4f93-a129-af707bcdd455-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:48:25.749979 kubelet[2904]: I0714 22:48:25.749975 2904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br8fs\" (UniqueName: \"kubernetes.io/projected/026a21cb-112c-4f93-a129-af707bcdd455-kube-api-access-br8fs\") on node \"localhost\" DevicePath \"\"" Jul 14 22:48:25.749979 kubelet[2904]: I0714 22:48:25.749983 2904 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a21cb-112c-4f93-a129-af707bcdd455-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:48:26.454537 kubelet[2904]: I0714 22:48:26.454461 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f038e7-5393-4c92-b691-f6743b212cd6-whisker-ca-bundle\") pod \"whisker-54b4758775-6h28m\" (UID: \"01f038e7-5393-4c92-b691-f6743b212cd6\") " pod="calico-system/whisker-54b4758775-6h28m" Jul 14 22:48:26.454537 kubelet[2904]: I0714 22:48:26.454491 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01f038e7-5393-4c92-b691-f6743b212cd6-whisker-backend-key-pair\") pod \"whisker-54b4758775-6h28m\" (UID: \"01f038e7-5393-4c92-b691-f6743b212cd6\") " pod="calico-system/whisker-54b4758775-6h28m" Jul 14 22:48:26.454537 kubelet[2904]: I0714 22:48:26.454501 2904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf86\" (UniqueName: 
\"kubernetes.io/projected/01f038e7-5393-4c92-b691-f6743b212cd6-kube-api-access-7mf86\") pod \"whisker-54b4758775-6h28m\" (UID: \"01f038e7-5393-4c92-b691-f6743b212cd6\") " pod="calico-system/whisker-54b4758775-6h28m" Jul 14 22:48:26.685221 containerd[1647]: time="2025-07-14T22:48:26.685195879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b4758775-6h28m,Uid:01f038e7-5393-4c92-b691-f6743b212cd6,Namespace:calico-system,Attempt:0,}" Jul 14 22:48:26.770804 systemd-networkd[1287]: calie57cc9fe765: Link UP Jul 14 22:48:26.770932 systemd-networkd[1287]: calie57cc9fe765: Gained carrier Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.710 [INFO][4288] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.718 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54b4758775--6h28m-eth0 whisker-54b4758775- calico-system 01f038e7-5393-4c92-b691-f6743b212cd6 905 0 2025-07-14 22:48:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54b4758775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54b4758775-6h28m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie57cc9fe765 [] [] }} ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.718 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.779750 
containerd[1647]: 2025-07-14 22:48:26.733 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" HandleID="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Workload="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.734 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" HandleID="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Workload="localhost-k8s-whisker--54b4758775--6h28m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54b4758775-6h28m", "timestamp":"2025-07-14 22:48:26.733963584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.734 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.734 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.734 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.740 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.748 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.750 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.751 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.752 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.752 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.753 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.755 [INFO][4296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.758 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.758 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" host="localhost" Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.758 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:26.779750 containerd[1647]: 2025-07-14 22:48:26.758 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" HandleID="k8s-pod-network.a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Workload="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.760 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54b4758775--6h28m-eth0", GenerateName:"whisker-54b4758775-", Namespace:"calico-system", SelfLink:"", UID:"01f038e7-5393-4c92-b691-f6743b212cd6", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b4758775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54b4758775-6h28m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie57cc9fe765", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.760 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.760 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie57cc9fe765 ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.767 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.768 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" 
WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54b4758775--6h28m-eth0", GenerateName:"whisker-54b4758775-", Namespace:"calico-system", SelfLink:"", UID:"01f038e7-5393-4c92-b691-f6743b212cd6", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b4758775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b", Pod:"whisker-54b4758775-6h28m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie57cc9fe765", MAC:"de:5f:50:f2:71:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:26.785195 containerd[1647]: 2025-07-14 22:48:26.777 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b" Namespace="calico-system" Pod="whisker-54b4758775-6h28m" WorkloadEndpoint="localhost-k8s-whisker--54b4758775--6h28m-eth0" Jul 14 22:48:26.797551 containerd[1647]: time="2025-07-14T22:48:26.796884280Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:26.797551 containerd[1647]: time="2025-07-14T22:48:26.796957036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:26.797551 containerd[1647]: time="2025-07-14T22:48:26.796966890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:26.797551 containerd[1647]: time="2025-07-14T22:48:26.797016907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:26.819867 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:26.855642 containerd[1647]: time="2025-07-14T22:48:26.855617719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b4758775-6h28m,Uid:01f038e7-5393-4c92-b691-f6743b212cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b\"" Jul 14 22:48:26.862545 containerd[1647]: time="2025-07-14T22:48:26.862519032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:48:27.693237 kubelet[2904]: I0714 22:48:27.693212 2904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026a21cb-112c-4f93-a129-af707bcdd455" path="/var/lib/kubelet/pods/026a21cb-112c-4f93-a129-af707bcdd455/volumes" Jul 14 22:48:27.995055 systemd-networkd[1287]: calie57cc9fe765: Gained IPv6LL Jul 14 22:48:28.275804 containerd[1647]: time="2025-07-14T22:48:28.275239906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:28.275804 containerd[1647]: time="2025-07-14T22:48:28.275730995Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:48:28.276102 containerd[1647]: time="2025-07-14T22:48:28.276075686Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:28.277610 containerd[1647]: time="2025-07-14T22:48:28.277591595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.415049352s" Jul 14 22:48:28.277688 containerd[1647]: time="2025-07-14T22:48:28.277679452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:48:28.277838 containerd[1647]: time="2025-07-14T22:48:28.277822986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:28.282611 containerd[1647]: time="2025-07-14T22:48:28.282511070Z" level=info msg="CreateContainer within sandbox \"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:48:28.296931 containerd[1647]: time="2025-07-14T22:48:28.296758896Z" level=info msg="CreateContainer within sandbox \"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"21a0d2a487e54ffc0d4850c0baf0eda830932bbbf4556cccd5e137c42e5b671b\"" Jul 14 22:48:28.299485 containerd[1647]: time="2025-07-14T22:48:28.297649941Z" level=info 
msg="StartContainer for \"21a0d2a487e54ffc0d4850c0baf0eda830932bbbf4556cccd5e137c42e5b671b\"" Jul 14 22:48:28.369186 containerd[1647]: time="2025-07-14T22:48:28.369163809Z" level=info msg="StartContainer for \"21a0d2a487e54ffc0d4850c0baf0eda830932bbbf4556cccd5e137c42e5b671b\" returns successfully" Jul 14 22:48:28.371419 containerd[1647]: time="2025-07-14T22:48:28.371396667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:48:30.168007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843745148.mount: Deactivated successfully. Jul 14 22:48:30.270211 kubelet[2904]: I0714 22:48:30.270186 2904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:48:30.402227 containerd[1647]: time="2025-07-14T22:48:30.402192345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:30.404102 containerd[1647]: time="2025-07-14T22:48:30.403599732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 22:48:30.404102 containerd[1647]: time="2025-07-14T22:48:30.403946075Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:30.406370 containerd[1647]: time="2025-07-14T22:48:30.406154265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:30.406747 containerd[1647]: time="2025-07-14T22:48:30.406662194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.034708578s" Jul 14 22:48:30.406747 containerd[1647]: time="2025-07-14T22:48:30.406678869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:48:30.409516 containerd[1647]: time="2025-07-14T22:48:30.409282742Z" level=info msg="CreateContainer within sandbox \"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:48:30.418302 containerd[1647]: time="2025-07-14T22:48:30.418239392Z" level=info msg="CreateContainer within sandbox \"a0ca95705e636d0fd437f020dfb0d4fbb1b5060adceafffdc71a7729a242616b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"39635abfeef546e5bba0b588e06d01ebc0ba1ae49f9cf743855b2e794cef2743\"" Jul 14 22:48:30.419164 containerd[1647]: time="2025-07-14T22:48:30.419004799Z" level=info msg="StartContainer for \"39635abfeef546e5bba0b588e06d01ebc0ba1ae49f9cf743855b2e794cef2743\"" Jul 14 22:48:30.509011 containerd[1647]: time="2025-07-14T22:48:30.508446447Z" level=info msg="StartContainer for \"39635abfeef546e5bba0b588e06d01ebc0ba1ae49f9cf743855b2e794cef2743\" returns successfully" Jul 14 22:48:30.690792 containerd[1647]: time="2025-07-14T22:48:30.690703327Z" level=info msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.725 [INFO][4530] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.725 [INFO][4530] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" iface="eth0" netns="/var/run/netns/cni-bbf87355-781b-7344-c53b-3ff141866b9f" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.725 [INFO][4530] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" iface="eth0" netns="/var/run/netns/cni-bbf87355-781b-7344-c53b-3ff141866b9f" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.725 [INFO][4530] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" iface="eth0" netns="/var/run/netns/cni-bbf87355-781b-7344-c53b-3ff141866b9f" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.725 [INFO][4530] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.726 [INFO][4530] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.740 [INFO][4537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.740 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.740 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.744 [WARNING][4537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.744 [INFO][4537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.745 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:30.747627 containerd[1647]: 2025-07-14 22:48:30.746 [INFO][4530] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:30.751016 containerd[1647]: time="2025-07-14T22:48:30.747732194Z" level=info msg="TearDown network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" successfully" Jul 14 22:48:30.751016 containerd[1647]: time="2025-07-14T22:48:30.747749376Z" level=info msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" returns successfully" Jul 14 22:48:30.751016 containerd[1647]: time="2025-07-14T22:48:30.748168241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-rxk2m,Uid:e45da145-d058-4e98-b935-90ad928df47e,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:48:30.809943 systemd-networkd[1287]: calie41ca5c1d74: Link UP Jul 14 22:48:30.812117 systemd-networkd[1287]: calie41ca5c1d74: Gained carrier Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.766 [INFO][4546] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.772 
[INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0 calico-apiserver-77d5bbcd9f- calico-apiserver e45da145-d058-4e98-b935-90ad928df47e 934 0 2025-07-14 22:48:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d5bbcd9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77d5bbcd9f-rxk2m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie41ca5c1d74 [] [] }} ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.772 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.788 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" HandleID="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.788 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" HandleID="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" 
Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77d5bbcd9f-rxk2m", "timestamp":"2025-07-14 22:48:30.788149263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.788 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.788 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.788 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.791 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.793 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.795 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.796 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.798 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.798 [INFO][4557] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.799 [INFO][4557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.801 [INFO][4557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.804 [INFO][4557] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.804 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" host="localhost" Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.804 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:48:30.821792 containerd[1647]: 2025-07-14 22:48:30.804 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" HandleID="k8s-pod-network.631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.805 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e45da145-d058-4e98-b935-90ad928df47e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77d5bbcd9f-rxk2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie41ca5c1d74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.806 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.806 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie41ca5c1d74 ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.812 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.812 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"e45da145-d058-4e98-b935-90ad928df47e", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd", Pod:"calico-apiserver-77d5bbcd9f-rxk2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie41ca5c1d74", MAC:"ae:70:dc:85:24:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:30.823903 containerd[1647]: 2025-07-14 22:48:30.818 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-rxk2m" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:30.841791 containerd[1647]: time="2025-07-14T22:48:30.841591196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:30.841791 containerd[1647]: time="2025-07-14T22:48:30.841657183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:30.841791 containerd[1647]: time="2025-07-14T22:48:30.841671740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:30.843795 systemd[1]: run-containerd-runc-k8s.io-39635abfeef546e5bba0b588e06d01ebc0ba1ae49f9cf743855b2e794cef2743-runc.sXsTs7.mount: Deactivated successfully. Jul 14 22:48:30.843876 systemd[1]: run-netns-cni\x2dbbf87355\x2d781b\x2d7344\x2dc53b\x2d3ff141866b9f.mount: Deactivated successfully. Jul 14 22:48:30.862635 containerd[1647]: time="2025-07-14T22:48:30.843028915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:30.860831 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:30.881083 containerd[1647]: time="2025-07-14T22:48:30.881063304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-rxk2m,Uid:e45da145-d058-4e98-b935-90ad928df47e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd\"" Jul 14 22:48:30.882198 containerd[1647]: time="2025-07-14T22:48:30.881975390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:48:31.348541 kernel: bpftool[4631]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:48:31.693170 containerd[1647]: time="2025-07-14T22:48:31.693101447Z" level=info msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" Jul 14 22:48:31.694709 containerd[1647]: time="2025-07-14T22:48:31.694694746Z" 
level=info msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" Jul 14 22:48:31.695764 containerd[1647]: time="2025-07-14T22:48:31.694943634Z" level=info msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" Jul 14 22:48:31.769577 kubelet[2904]: I0714 22:48:31.769541 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54b4758775-6h28m" podStartSLOduration=2.222161425 podStartE2EDuration="5.769525972s" podCreationTimestamp="2025-07-14 22:48:26 +0000 UTC" firstStartedPulling="2025-07-14 22:48:26.86006241 +0000 UTC m=+35.251354702" lastFinishedPulling="2025-07-14 22:48:30.407426963 +0000 UTC m=+38.798719249" observedRunningTime="2025-07-14 22:48:31.354935264 +0000 UTC m=+39.746227553" watchObservedRunningTime="2025-07-14 22:48:31.769525972 +0000 UTC m=+40.160818261" Jul 14 22:48:31.822213 systemd-networkd[1287]: vxlan.calico: Link UP Jul 14 22:48:31.822217 systemd-networkd[1287]: vxlan.calico: Gained carrier Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" iface="eth0" netns="/var/run/netns/cni-06eccf6c-aac7-6c20-1207-63d88709c9b8" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" iface="eth0" netns="/var/run/netns/cni-06eccf6c-aac7-6c20-1207-63d88709c9b8" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" iface="eth0" netns="/var/run/netns/cni-06eccf6c-aac7-6c20-1207-63d88709c9b8" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.770 [INFO][4707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.809 [INFO][4721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.809 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.810 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.824 [WARNING][4721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.829 [INFO][4721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.838 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:31.840631 containerd[1647]: 2025-07-14 22:48:31.839 [INFO][4707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:31.848101 containerd[1647]: time="2025-07-14T22:48:31.841964534Z" level=info msg="TearDown network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" successfully" Jul 14 22:48:31.848101 containerd[1647]: time="2025-07-14T22:48:31.842000627Z" level=info msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" returns successfully" Jul 14 22:48:31.844273 systemd[1]: run-netns-cni\x2d06eccf6c\x2daac7\x2d6c20\x2d1207\x2d63d88709c9b8.mount: Deactivated successfully. 
Jul 14 22:48:31.854165 containerd[1647]: time="2025-07-14T22:48:31.854083413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-bfs2w,Uid:5ab63976-6c51-436b-83f8-138c3003c36b,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.779 [INFO][4698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.780 [INFO][4698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" iface="eth0" netns="/var/run/netns/cni-34ac0300-8c4c-226b-ce16-01c5c09be1ec" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.780 [INFO][4698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" iface="eth0" netns="/var/run/netns/cni-34ac0300-8c4c-226b-ce16-01c5c09be1ec" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.780 [INFO][4698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" iface="eth0" netns="/var/run/netns/cni-34ac0300-8c4c-226b-ce16-01c5c09be1ec" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.780 [INFO][4698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.780 [INFO][4698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.833 [INFO][4726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.833 [INFO][4726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.838 [INFO][4726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.850 [WARNING][4726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.850 [INFO][4726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.851 [INFO][4726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:31.856772 containerd[1647]: 2025-07-14 22:48:31.854 [INFO][4698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.859838160Z" level=info msg="TearDown network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" successfully" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.859856620Z" level=info msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" returns successfully" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.794 [INFO][4703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.796 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" iface="eth0" netns="/var/run/netns/cni-e488abda-b088-86a3-b705-ff0e3eb85ece" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.796 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" iface="eth0" netns="/var/run/netns/cni-e488abda-b088-86a3-b705-ff0e3eb85ece" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.797 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" iface="eth0" netns="/var/run/netns/cni-e488abda-b088-86a3-b705-ff0e3eb85ece" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.797 [INFO][4703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.797 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.837 [INFO][4733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.837 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.851 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.856 [WARNING][4733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.856 [INFO][4733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.859 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:31.884299 containerd[1647]: 2025-07-14 22:48:31.861 [INFO][4703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.862385533Z" level=info msg="TearDown network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" successfully" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.862396284Z" level=info msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" returns successfully" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.877940612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bddbbd7-xf2v7,Uid:6e5c12ff-f464-401e-803f-958a1cf87455,Namespace:calico-system,Attempt:1,}" Jul 14 22:48:31.884299 containerd[1647]: time="2025-07-14T22:48:31.878082144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78g4,Uid:33683c0b-99a6-49cc-aa17-19ada6d1c944,Namespace:calico-system,Attempt:1,}" Jul 14 22:48:31.858654 systemd[1]: run-netns-cni\x2d34ac0300\x2d8c4c\x2d226b\x2dce16\x2d01c5c09be1ec.mount: Deactivated successfully. 
Jul 14 22:48:31.863835 systemd[1]: run-netns-cni\x2de488abda\x2db088\x2d86a3\x2db705\x2dff0e3eb85ece.mount: Deactivated successfully. Jul 14 22:48:32.325615 systemd-networkd[1287]: cali5882607113a: Link UP Jul 14 22:48:32.325729 systemd-networkd[1287]: cali5882607113a: Gained carrier Jul 14 22:48:32.411257 systemd-networkd[1287]: calie41ca5c1d74: Gained IPv6LL Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.175 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--f78g4-eth0 csi-node-driver- calico-system 33683c0b-99a6-49cc-aa17-19ada6d1c944 949 0 2025-07-14 22:48:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-f78g4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5882607113a [] [] }} ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.175 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.209 [INFO][4841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" HandleID="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" 
Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.209 [INFO][4841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" HandleID="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-f78g4", "timestamp":"2025-07-14 22:48:32.209439876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.209 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.209 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.209 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.216 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.221 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.225 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.226 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.228 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.228 [INFO][4841] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.230 [INFO][4841] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6 Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.234 [INFO][4841] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.320 [INFO][4841] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.320 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" host="localhost" Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.320 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:32.412824 containerd[1647]: 2025-07-14 22:48:32.320 [INFO][4841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" HandleID="k8s-pod-network.3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.322 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33683c0b-99a6-49cc-aa17-19ada6d1c944", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-f78g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5882607113a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.322 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.322 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5882607113a ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.325 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.325 [INFO][4792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" 
Namespace="calico-system" Pod="csi-node-driver-f78g4" WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33683c0b-99a6-49cc-aa17-19ada6d1c944", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6", Pod:"csi-node-driver-f78g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5882607113a", MAC:"da:a6:68:e1:55:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:32.422875 containerd[1647]: 2025-07-14 22:48:32.410 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6" Namespace="calico-system" Pod="csi-node-driver-f78g4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:32.487028 containerd[1647]: time="2025-07-14T22:48:32.486564217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:32.487028 containerd[1647]: time="2025-07-14T22:48:32.486634043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:32.487028 containerd[1647]: time="2025-07-14T22:48:32.486646819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:32.487028 containerd[1647]: time="2025-07-14T22:48:32.486712002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:32.488593 systemd-networkd[1287]: cali3cff5a53759: Link UP Jul 14 22:48:32.488707 systemd-networkd[1287]: cali3cff5a53759: Gained carrier Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.158 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0 calico-kube-controllers-58bddbbd7- calico-system 6e5c12ff-f464-401e-803f-958a1cf87455 948 0 2025-07-14 22:48:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58bddbbd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58bddbbd7-xf2v7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3cff5a53759 [] [] }} ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" 
Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.158 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.214 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" HandleID="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.214 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" HandleID="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58bddbbd7-xf2v7", "timestamp":"2025-07-14 22:48:32.214165154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.214 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.320 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.321 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.330 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.424 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.427 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.428 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.429 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.429 [INFO][4834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.430 [INFO][4834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4 Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.451 [INFO][4834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.479 [INFO][4834] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.479 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" host="localhost" Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.479 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:32.515106 containerd[1647]: 2025-07-14 22:48:32.479 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" HandleID="k8s-pod-network.20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.485 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0", GenerateName:"calico-kube-controllers-58bddbbd7-", Namespace:"calico-system", SelfLink:"", UID:"6e5c12ff-f464-401e-803f-958a1cf87455", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"58bddbbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58bddbbd7-xf2v7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cff5a53759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.485 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.485 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cff5a53759 ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.487 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.487 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0", GenerateName:"calico-kube-controllers-58bddbbd7-", Namespace:"calico-system", SelfLink:"", UID:"6e5c12ff-f464-401e-803f-958a1cf87455", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bddbbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4", Pod:"calico-kube-controllers-58bddbbd7-xf2v7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cff5a53759", MAC:"fa:93:7a:5c:c3:09", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:32.515583 containerd[1647]: 2025-07-14 22:48:32.510 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4" Namespace="calico-system" Pod="calico-kube-controllers-58bddbbd7-xf2v7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:32.575870 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:32.585001 containerd[1647]: time="2025-07-14T22:48:32.584972057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f78g4,Uid:33683c0b-99a6-49cc-aa17-19ada6d1c944,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6\"" Jul 14 22:48:32.595060 containerd[1647]: time="2025-07-14T22:48:32.594975371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:32.595503 containerd[1647]: time="2025-07-14T22:48:32.595023346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:32.595503 containerd[1647]: time="2025-07-14T22:48:32.595417935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:32.595650 containerd[1647]: time="2025-07-14T22:48:32.595473874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:32.630200 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:32.698127 systemd-networkd[1287]: calie7787838cae: Link UP Jul 14 22:48:32.698265 systemd-networkd[1287]: calie7787838cae: Gained carrier Jul 14 22:48:32.763817 containerd[1647]: time="2025-07-14T22:48:32.763681593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bddbbd7-xf2v7,Uid:6e5c12ff-f464-401e-803f-958a1cf87455,Namespace:calico-system,Attempt:1,} returns sandbox id \"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4\"" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.183 [INFO][4810] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0 calico-apiserver-77d5bbcd9f- calico-apiserver 5ab63976-6c51-436b-83f8-138c3003c36b 947 0 2025-07-14 22:48:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d5bbcd9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77d5bbcd9f-bfs2w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie7787838cae [] [] }} ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.184 [INFO][4810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.230 [INFO][4848] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" HandleID="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.231 [INFO][4848] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" HandleID="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000258ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77d5bbcd9f-bfs2w", "timestamp":"2025-07-14 22:48:32.230872279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.231 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.479 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.480 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.487 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.520 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.601 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.625 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.628 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.628 [INFO][4848] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.639 [INFO][4848] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409 Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.656 [INFO][4848] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.685 [INFO][4848] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.685 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" host="localhost" Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.685 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:33.060279 containerd[1647]: 2025-07-14 22:48:32.685 [INFO][4848] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" HandleID="k8s-pod-network.7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:32.689 [INFO][4810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ab63976-6c51-436b-83f8-138c3003c36b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77d5bbcd9f-bfs2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7787838cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:32.690 [INFO][4810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:32.691 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7787838cae ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:32.700 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:32.700 [INFO][4810] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ab63976-6c51-436b-83f8-138c3003c36b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409", Pod:"calico-apiserver-77d5bbcd9f-bfs2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7787838cae", MAC:"3e:d8:02:3c:25:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:33.061260 containerd[1647]: 2025-07-14 22:48:33.058 [INFO][4810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409" Namespace="calico-apiserver" Pod="calico-apiserver-77d5bbcd9f-bfs2w" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:33.367808 containerd[1647]: time="2025-07-14T22:48:33.367705583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:33.370008 containerd[1647]: time="2025-07-14T22:48:33.369100489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:33.370008 containerd[1647]: time="2025-07-14T22:48:33.369131168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:33.370008 containerd[1647]: time="2025-07-14T22:48:33.369265055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:33.390880 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:33.423925 containerd[1647]: time="2025-07-14T22:48:33.423902600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d5bbcd9f-bfs2w,Uid:5ab63976-6c51-436b-83f8-138c3003c36b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409\"" Jul 14 22:48:33.691762 containerd[1647]: time="2025-07-14T22:48:33.691482132Z" level=info msg="StopPodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" Jul 14 22:48:33.704549 containerd[1647]: time="2025-07-14T22:48:33.704477130Z" level=info msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" Jul 14 22:48:33.755463 systemd-networkd[1287]: vxlan.calico: Gained IPv6LL Jul 14 
22:48:33.883189 systemd-networkd[1287]: cali3cff5a53759: Gained IPv6LL Jul 14 22:48:34.011008 systemd-networkd[1287]: cali5882607113a: Gained IPv6LL Jul 14 22:48:34.331263 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:34.347312 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:34.331283 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:34.395017 systemd-networkd[1287]: calie7787838cae: Gained IPv6LL Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.137 [INFO][5039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.168 [INFO][5039] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" iface="eth0" netns="/var/run/netns/cni-1c52072e-1200-0bc0-2e2c-9bae1d21a32e" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.168 [INFO][5039] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" iface="eth0" netns="/var/run/netns/cni-1c52072e-1200-0bc0-2e2c-9bae1d21a32e" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.168 [INFO][5039] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" iface="eth0" netns="/var/run/netns/cni-1c52072e-1200-0bc0-2e2c-9bae1d21a32e" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.168 [INFO][5039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.168 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.366 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.392 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.392 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.402 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.402 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.403 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:34.423644 containerd[1647]: 2025-07-14 22:48:34.410 [INFO][5039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:34.424526 containerd[1647]: time="2025-07-14T22:48:34.424269228Z" level=info msg="TearDown network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" successfully" Jul 14 22:48:34.424526 containerd[1647]: time="2025-07-14T22:48:34.424292690Z" level=info msg="StopPodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" returns successfully" Jul 14 22:48:34.424856 containerd[1647]: time="2025-07-14T22:48:34.424783251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gtzwg,Uid:7ffbe111-cda0-4331-a242-4ff57bc4f605,Namespace:kube-system,Attempt:1,}" Jul 14 22:48:34.427132 systemd[1]: run-netns-cni\x2d1c52072e\x2d1200\x2d0bc0\x2d2e2c\x2d9bae1d21a32e.mount: Deactivated successfully. 
Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.097 [INFO][5040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.097 [INFO][5040] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" iface="eth0" netns="/var/run/netns/cni-d540ac73-ef26-01f2-92cf-a722950ec363" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.097 [INFO][5040] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" iface="eth0" netns="/var/run/netns/cni-d540ac73-ef26-01f2-92cf-a722950ec363" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.098 [INFO][5040] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" iface="eth0" netns="/var/run/netns/cni-d540ac73-ef26-01f2-92cf-a722950ec363" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.098 [INFO][5040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.098 [INFO][5040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.371 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.392 [INFO][5052] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.403 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.414 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.422 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.423 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:34.436352 containerd[1647]: 2025-07-14 22:48:34.425 [INFO][5040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:34.438357 containerd[1647]: time="2025-07-14T22:48:34.437944419Z" level=info msg="TearDown network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" successfully" Jul 14 22:48:34.438357 containerd[1647]: time="2025-07-14T22:48:34.437960366Z" level=info msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" returns successfully" Jul 14 22:48:34.438221 systemd[1]: run-netns-cni\x2dd540ac73\x2def26\x2d01f2\x2d92cf\x2da722950ec363.mount: Deactivated successfully. 
Jul 14 22:48:34.441136 containerd[1647]: time="2025-07-14T22:48:34.441111827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c6d2z,Uid:b5e7e152-33a4-475e-81b6-5e99058e6154,Namespace:calico-system,Attempt:1,}" Jul 14 22:48:34.701697 containerd[1647]: time="2025-07-14T22:48:34.701115375Z" level=info msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" Jul 14 22:48:34.702441 containerd[1647]: time="2025-07-14T22:48:34.702421617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:34.781927 containerd[1647]: time="2025-07-14T22:48:34.781850597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 22:48:34.808816 containerd[1647]: time="2025-07-14T22:48:34.808779814Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:34.813088 containerd[1647]: time="2025-07-14T22:48:34.813045238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:34.814998 containerd[1647]: time="2025-07-14T22:48:34.814961328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.932968246s" Jul 14 22:48:34.818199 containerd[1647]: time="2025-07-14T22:48:34.818002691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns 
image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:48:34.821556 containerd[1647]: time="2025-07-14T22:48:34.821525592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:48:34.823186 containerd[1647]: time="2025-07-14T22:48:34.822957996Z" level=info msg="CreateContainer within sandbox \"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:48:34.833204 containerd[1647]: time="2025-07-14T22:48:34.833168359Z" level=info msg="CreateContainer within sandbox \"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a72f955d0affbb1b8bf53cf51e22cc18dc7809046f1b6dd24d2c176d5b40fef8\"" Jul 14 22:48:34.834455 containerd[1647]: time="2025-07-14T22:48:34.834436354Z" level=info msg="StartContainer for \"a72f955d0affbb1b8bf53cf51e22cc18dc7809046f1b6dd24d2c176d5b40fef8\"" Jul 14 22:48:34.879503 systemd-networkd[1287]: calic86417ade66: Link UP Jul 14 22:48:34.881016 systemd-networkd[1287]: calic86417ade66: Gained carrier Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.819 [INFO][5086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.821 [INFO][5086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" iface="eth0" netns="/var/run/netns/cni-c6c8be27-ee63-cd48-7043-370d384d4051" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.822 [INFO][5086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" iface="eth0" netns="/var/run/netns/cni-c6c8be27-ee63-cd48-7043-370d384d4051" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.822 [INFO][5086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" iface="eth0" netns="/var/run/netns/cni-c6c8be27-ee63-cd48-7043-370d384d4051" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.823 [INFO][5086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.823 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.868 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.869 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.869 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.875 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.875 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.879 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:34.893473 containerd[1647]: 2025-07-14 22:48:34.883 [INFO][5086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:34.895070 containerd[1647]: time="2025-07-14T22:48:34.893618304Z" level=info msg="TearDown network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" successfully" Jul 14 22:48:34.895070 containerd[1647]: time="2025-07-14T22:48:34.893663404Z" level=info msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" returns successfully" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.782 [INFO][5071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0 coredns-7c65d6cfc9- kube-system 7ffbe111-cda0-4331-a242-4ff57bc4f605 972 0 2025-07-14 22:47:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gtzwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic86417ade66 [{dns 
UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.782 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.826 [INFO][5099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" HandleID="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.827 [INFO][5099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" HandleID="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gtzwg", "timestamp":"2025-07-14 22:48:34.825158193 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.827 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.827 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.827 [INFO][5099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.836 [INFO][5099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.841 [INFO][5099] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.849 [INFO][5099] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.850 [INFO][5099] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.854 [INFO][5099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.854 [INFO][5099] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.857 [INFO][5099] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2 Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.862 [INFO][5099] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.867 [INFO][5099] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.867 [INFO][5099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" host="localhost" Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.867 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:34.895070 containerd[1647]: 2025-07-14 22:48:34.867 [INFO][5099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" HandleID="k8s-pod-network.3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.869 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ffbe111-cda0-4331-a242-4ff57bc4f605", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gtzwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86417ade66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.869 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.869 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic86417ade66 ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.881 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.882 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ffbe111-cda0-4331-a242-4ff57bc4f605", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2", Pod:"coredns-7c65d6cfc9-gtzwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86417ade66", MAC:"f2:09:e9:2f:d7:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:34.899357 containerd[1647]: 2025-07-14 22:48:34.891 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gtzwg" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:34.904753 containerd[1647]: time="2025-07-14T22:48:34.904301786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt7fd,Uid:55a1ec98-b34e-4be5-aa77-830413ef608b,Namespace:kube-system,Attempt:1,}" Jul 14 22:48:34.948200 containerd[1647]: time="2025-07-14T22:48:34.948030331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:34.948200 containerd[1647]: time="2025-07-14T22:48:34.948089250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:34.948528 containerd[1647]: time="2025-07-14T22:48:34.948101293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:34.948528 containerd[1647]: time="2025-07-14T22:48:34.948383811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:34.984323 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:34.990630 systemd-networkd[1287]: cali31c6b5f388e: Link UP Jul 14 22:48:34.992351 systemd-networkd[1287]: cali31c6b5f388e: Gained carrier Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.854 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0 goldmane-58fd7646b9- calico-system b5e7e152-33a4-475e-81b6-5e99058e6154 971 0 2025-07-14 22:48:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-c6d2z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali31c6b5f388e [] [] }} ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.854 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.927 [INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" HandleID="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.037396 containerd[1647]: 
2025-07-14 22:48:34.927 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" HandleID="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-c6d2z", "timestamp":"2025-07-14 22:48:34.927100688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.927 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.927 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.927 [INFO][5131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.938 [INFO][5131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.943 [INFO][5131] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.950 [INFO][5131] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.954 [INFO][5131] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.957 [INFO][5131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.957 [INFO][5131] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.959 [INFO][5131] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485 Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.973 [INFO][5131] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.979 [INFO][5131] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.983 [INFO][5131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" host="localhost" Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.983 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:35.037396 containerd[1647]: 2025-07-14 22:48:34.983 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" HandleID="k8s-pod-network.5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:34.986 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b5e7e152-33a4-475e-81b6-5e99058e6154", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-c6d2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31c6b5f388e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:34.988 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:34.988 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31c6b5f388e ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:34.999 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:35.005 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b5e7e152-33a4-475e-81b6-5e99058e6154", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485", Pod:"goldmane-58fd7646b9-c6d2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31c6b5f388e", MAC:"56:8b:78:b6:4a:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:35.043317 containerd[1647]: 2025-07-14 22:48:35.030 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485" Namespace="calico-system" Pod="goldmane-58fd7646b9-c6d2z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:35.043317 containerd[1647]: time="2025-07-14T22:48:35.039841594Z" level=info msg="StartContainer for 
\"a72f955d0affbb1b8bf53cf51e22cc18dc7809046f1b6dd24d2c176d5b40fef8\" returns successfully" Jul 14 22:48:35.043317 containerd[1647]: time="2025-07-14T22:48:35.039951667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gtzwg,Uid:7ffbe111-cda0-4331-a242-4ff57bc4f605,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2\"" Jul 14 22:48:35.061186 containerd[1647]: time="2025-07-14T22:48:35.061096350Z" level=info msg="CreateContainer within sandbox \"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:48:35.082930 containerd[1647]: time="2025-07-14T22:48:35.082593596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:35.083331 containerd[1647]: time="2025-07-14T22:48:35.082997285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:35.083331 containerd[1647]: time="2025-07-14T22:48:35.083011292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:35.083331 containerd[1647]: time="2025-07-14T22:48:35.083079144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:35.122349 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:35.141109 systemd-networkd[1287]: cali3124dc45690: Link UP Jul 14 22:48:35.142729 systemd-networkd[1287]: cali3124dc45690: Gained carrier Jul 14 22:48:35.173103 containerd[1647]: time="2025-07-14T22:48:35.172989428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-c6d2z,Uid:b5e7e152-33a4-475e-81b6-5e99058e6154,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485\"" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.031 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0 coredns-7c65d6cfc9- kube-system 55a1ec98-b34e-4be5-aa77-830413ef608b 977 0 2025-07-14 22:47:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qt7fd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3124dc45690 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.034 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.090 [INFO][5237] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" HandleID="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.091 [INFO][5237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" HandleID="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qt7fd", "timestamp":"2025-07-14 22:48:35.090917323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.091 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.091 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.091 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.096 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.099 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.103 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.105 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.106 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.106 [INFO][5237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.108 [INFO][5237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4 Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.116 [INFO][5237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.128 [INFO][5237] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.128 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" host="localhost" Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.128 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:35.173979 containerd[1647]: 2025-07-14 22:48:35.129 [INFO][5237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" HandleID="k8s-pod-network.ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.136 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"55a1ec98-b34e-4be5-aa77-830413ef608b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qt7fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3124dc45690", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.138 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.138 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3124dc45690 ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.143 [INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.145 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"55a1ec98-b34e-4be5-aa77-830413ef608b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4", Pod:"coredns-7c65d6cfc9-qt7fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3124dc45690", MAC:"4a:23:1c:1a:13:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:35.174486 containerd[1647]: 2025-07-14 22:48:35.169 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qt7fd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:35.187408 containerd[1647]: time="2025-07-14T22:48:35.187372470Z" level=info msg="CreateContainer within sandbox \"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cccd546f0b7fc706f87c97a23760ebc3968f0d4cb159f1b96f5e79200740583\"" Jul 14 22:48:35.189180 containerd[1647]: time="2025-07-14T22:48:35.189156594Z" level=info msg="StartContainer for \"1cccd546f0b7fc706f87c97a23760ebc3968f0d4cb159f1b96f5e79200740583\"" Jul 14 22:48:35.227593 containerd[1647]: time="2025-07-14T22:48:35.227533668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:48:35.228611 containerd[1647]: time="2025-07-14T22:48:35.228382742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:48:35.228611 containerd[1647]: time="2025-07-14T22:48:35.228413065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:35.228611 containerd[1647]: time="2025-07-14T22:48:35.228519929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:48:35.268138 containerd[1647]: time="2025-07-14T22:48:35.267296422Z" level=info msg="StartContainer for \"1cccd546f0b7fc706f87c97a23760ebc3968f0d4cb159f1b96f5e79200740583\" returns successfully" Jul 14 22:48:35.275083 systemd-resolved[1541]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:48:35.317076 containerd[1647]: time="2025-07-14T22:48:35.317054235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qt7fd,Uid:55a1ec98-b34e-4be5-aa77-830413ef608b,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4\"" Jul 14 22:48:35.330577 containerd[1647]: time="2025-07-14T22:48:35.329623183Z" level=info msg="CreateContainer within sandbox \"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:48:35.367962 containerd[1647]: time="2025-07-14T22:48:35.367930325Z" level=info msg="CreateContainer within sandbox \"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"726834fcc61a85f7441cd1a374d0fcdea7d5d267763b31f3f1cd5fb62afe5fdb\"" Jul 14 22:48:35.370911 containerd[1647]: time="2025-07-14T22:48:35.369111865Z" level=info msg="StartContainer for \"726834fcc61a85f7441cd1a374d0fcdea7d5d267763b31f3f1cd5fb62afe5fdb\"" Jul 14 22:48:35.447983 systemd[1]: run-netns-cni\x2dc6c8be27\x2dee63\x2dcd48\x2d7043\x2d370d384d4051.mount: Deactivated successfully. 
Jul 14 22:48:35.474561 containerd[1647]: time="2025-07-14T22:48:35.474478604Z" level=info msg="StartContainer for \"726834fcc61a85f7441cd1a374d0fcdea7d5d267763b31f3f1cd5fb62afe5fdb\" returns successfully" Jul 14 22:48:35.510628 kubelet[2904]: I0714 22:48:35.509537 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-rxk2m" podStartSLOduration=27.571602899 podStartE2EDuration="31.509520872s" podCreationTimestamp="2025-07-14 22:48:04 +0000 UTC" firstStartedPulling="2025-07-14 22:48:30.881818618 +0000 UTC m=+39.273110904" lastFinishedPulling="2025-07-14 22:48:34.819736585 +0000 UTC m=+43.211028877" observedRunningTime="2025-07-14 22:48:35.509004604 +0000 UTC m=+43.900296900" watchObservedRunningTime="2025-07-14 22:48:35.509520872 +0000 UTC m=+43.900813161" Jul 14 22:48:35.510628 kubelet[2904]: I0714 22:48:35.509744 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gtzwg" podStartSLOduration=39.509738636 podStartE2EDuration="39.509738636s" podCreationTimestamp="2025-07-14 22:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:48:35.44396216 +0000 UTC m=+43.835254450" watchObservedRunningTime="2025-07-14 22:48:35.509738636 +0000 UTC m=+43.901030927" Jul 14 22:48:36.059077 systemd-networkd[1287]: calic86417ade66: Gained IPv6LL Jul 14 22:48:36.379045 systemd-networkd[1287]: cali31c6b5f388e: Gained IPv6LL Jul 14 22:48:36.380354 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:36.380368 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:36.382319 systemd-journald[1180]: Under memory pressure, flushing caches. 
Jul 14 22:48:36.408826 containerd[1647]: time="2025-07-14T22:48:36.408796030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:36.409289 containerd[1647]: time="2025-07-14T22:48:36.409247359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 14 22:48:36.411996 containerd[1647]: time="2025-07-14T22:48:36.409283180Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:36.411996 containerd[1647]: time="2025-07-14T22:48:36.410424503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:36.411996 containerd[1647]: time="2025-07-14T22:48:36.410834816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.589161827s" Jul 14 22:48:36.411996 containerd[1647]: time="2025-07-14T22:48:36.410852105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:48:36.411996 containerd[1647]: time="2025-07-14T22:48:36.411664244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:48:36.415310 containerd[1647]: time="2025-07-14T22:48:36.415205636Z" level=info msg="CreateContainer within sandbox \"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:48:36.460625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110633764.mount: Deactivated successfully. Jul 14 22:48:36.469044 containerd[1647]: time="2025-07-14T22:48:36.468764360Z" level=info msg="CreateContainer within sandbox \"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6c766882404b97a42941e58bc40e4ec0dc492d53f784ee7b3c78df3d1b3629c6\"" Jul 14 22:48:36.469646 containerd[1647]: time="2025-07-14T22:48:36.469167834Z" level=info msg="StartContainer for \"6c766882404b97a42941e58bc40e4ec0dc492d53f784ee7b3c78df3d1b3629c6\"" Jul 14 22:48:36.563553 containerd[1647]: time="2025-07-14T22:48:36.563523097Z" level=info msg="StartContainer for \"6c766882404b97a42941e58bc40e4ec0dc492d53f784ee7b3c78df3d1b3629c6\" returns successfully" Jul 14 22:48:36.567100 kubelet[2904]: I0714 22:48:36.567058 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qt7fd" podStartSLOduration=40.567041954 podStartE2EDuration="40.567041954s" podCreationTimestamp="2025-07-14 22:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:48:36.561400814 +0000 UTC m=+44.952693112" watchObservedRunningTime="2025-07-14 22:48:36.567041954 +0000 UTC m=+44.958334246" Jul 14 22:48:36.763243 systemd-networkd[1287]: cali3124dc45690: Gained IPv6LL Jul 14 22:48:38.426949 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:38.427947 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:38.426970 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:48:39.461249 containerd[1647]: time="2025-07-14T22:48:39.461178675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:39.464714 containerd[1647]: time="2025-07-14T22:48:39.462607282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:48:39.464917 containerd[1647]: time="2025-07-14T22:48:39.464896170Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:39.480561 containerd[1647]: time="2025-07-14T22:48:39.480527831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:39.485870 containerd[1647]: time="2025-07-14T22:48:39.480920394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.069239806s" Jul 14 22:48:39.485870 containerd[1647]: time="2025-07-14T22:48:39.480942964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:48:39.932660 containerd[1647]: time="2025-07-14T22:48:39.932630434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:48:40.048265 containerd[1647]: time="2025-07-14T22:48:40.048227716Z" level=info msg="CreateContainer within sandbox 
\"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:48:40.096671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723184584.mount: Deactivated successfully. Jul 14 22:48:40.193672 containerd[1647]: time="2025-07-14T22:48:40.193183849Z" level=info msg="CreateContainer within sandbox \"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"540fe05e00e3f29e1c3d736f5c9495773651d0a2e90ed20c305cba7bc1af6eef\"" Jul 14 22:48:40.226400 containerd[1647]: time="2025-07-14T22:48:40.225981991Z" level=info msg="StartContainer for \"540fe05e00e3f29e1c3d736f5c9495773651d0a2e90ed20c305cba7bc1af6eef\"" Jul 14 22:48:40.475080 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:40.475098 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:40.476904 systemd-journald[1180]: Under memory pressure, flushing caches. 
Jul 14 22:48:40.537926 containerd[1647]: time="2025-07-14T22:48:40.537743967Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:40.547363 containerd[1647]: time="2025-07-14T22:48:40.547321663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:48:40.558912 containerd[1647]: time="2025-07-14T22:48:40.558668651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 626.005698ms" Jul 14 22:48:40.558912 containerd[1647]: time="2025-07-14T22:48:40.558697258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:48:40.615841 containerd[1647]: time="2025-07-14T22:48:40.615820090Z" level=info msg="StartContainer for \"540fe05e00e3f29e1c3d736f5c9495773651d0a2e90ed20c305cba7bc1af6eef\" returns successfully" Jul 14 22:48:40.627311 containerd[1647]: time="2025-07-14T22:48:40.627061223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:48:40.631026 containerd[1647]: time="2025-07-14T22:48:40.630586975Z" level=info msg="CreateContainer within sandbox \"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:48:40.646904 containerd[1647]: time="2025-07-14T22:48:40.646431304Z" level=info msg="CreateContainer within sandbox \"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"cf4cb4a6d0602cfda820047f691d103678f513da36c8fb5fc4def2e1b6d2f8c6\"" Jul 14 22:48:40.648900 containerd[1647]: time="2025-07-14T22:48:40.648009550Z" level=info msg="StartContainer for \"cf4cb4a6d0602cfda820047f691d103678f513da36c8fb5fc4def2e1b6d2f8c6\"" Jul 14 22:48:40.765638 containerd[1647]: time="2025-07-14T22:48:40.765578665Z" level=info msg="StartContainer for \"cf4cb4a6d0602cfda820047f691d103678f513da36c8fb5fc4def2e1b6d2f8c6\" returns successfully" Jul 14 22:48:41.153827 kubelet[2904]: I0714 22:48:41.149554 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58bddbbd7-xf2v7" podStartSLOduration=26.156268939 podStartE2EDuration="33.144382715s" podCreationTimestamp="2025-07-14 22:48:08 +0000 UTC" firstStartedPulling="2025-07-14 22:48:32.865859269 +0000 UTC m=+41.257151558" lastFinishedPulling="2025-07-14 22:48:39.853973048 +0000 UTC m=+48.245265334" observedRunningTime="2025-07-14 22:48:41.049465903 +0000 UTC m=+49.440758199" watchObservedRunningTime="2025-07-14 22:48:41.144382715 +0000 UTC m=+49.535675006" Jul 14 22:48:41.153827 kubelet[2904]: I0714 22:48:41.153759 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d5bbcd9f-bfs2w" podStartSLOduration=29.952715974 podStartE2EDuration="37.15374599s" podCreationTimestamp="2025-07-14 22:48:04 +0000 UTC" firstStartedPulling="2025-07-14 22:48:33.424851585 +0000 UTC m=+41.816143872" lastFinishedPulling="2025-07-14 22:48:40.625881602 +0000 UTC m=+49.017173888" observedRunningTime="2025-07-14 22:48:41.060601911 +0000 UTC m=+49.451894207" watchObservedRunningTime="2025-07-14 22:48:41.15374599 +0000 UTC m=+49.545038287" Jul 14 22:48:42.525011 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:42.535717 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:42.525032 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:48:43.781275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364732977.mount: Deactivated successfully. Jul 14 22:48:44.572305 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:44.571297 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:44.571302 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:45.212807 containerd[1647]: time="2025-07-14T22:48:45.212758813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:45.305502 containerd[1647]: time="2025-07-14T22:48:45.213425468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 14 22:48:45.322182 containerd[1647]: time="2025-07-14T22:48:45.322158616Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:45.331786 containerd[1647]: time="2025-07-14T22:48:45.331763954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:45.332871 containerd[1647]: time="2025-07-14T22:48:45.332333199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.705244201s" Jul 14 22:48:45.332871 containerd[1647]: time="2025-07-14T22:48:45.332358764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference 
\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 22:48:45.504471 containerd[1647]: time="2025-07-14T22:48:45.504152113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:48:45.521032 kubelet[2904]: E0714 22:48:45.520275 2904 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.642s" Jul 14 22:48:45.529045 systemd[1]: run-containerd-runc-k8s.io-59a5476f4f89edfb225680ce375bf0b08525d224d3bdc4709c20af2e929f9896-runc.zjc8P6.mount: Deactivated successfully. Jul 14 22:48:45.642947 containerd[1647]: time="2025-07-14T22:48:45.642785059Z" level=info msg="CreateContainer within sandbox \"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:48:45.685262 containerd[1647]: time="2025-07-14T22:48:45.685206626Z" level=info msg="CreateContainer within sandbox \"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b8ec4823e84a9b160d05aced6b442eb5092232a9851661b3ee5dbc6f3e5f8454\"" Jul 14 22:48:45.687015 containerd[1647]: time="2025-07-14T22:48:45.686873777Z" level=info msg="StartContainer for \"b8ec4823e84a9b160d05aced6b442eb5092232a9851661b3ee5dbc6f3e5f8454\"" Jul 14 22:48:46.036122 containerd[1647]: time="2025-07-14T22:48:46.036052695Z" level=info msg="StartContainer for \"b8ec4823e84a9b160d05aced6b442eb5092232a9851661b3ee5dbc6f3e5f8454\" returns successfully" Jul 14 22:48:46.619171 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:46.622550 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:46.619186 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:48:46.904170 kubelet[2904]: I0714 22:48:46.837715 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-c6d2z" podStartSLOduration=28.492865578 podStartE2EDuration="38.76751878s" podCreationTimestamp="2025-07-14 22:48:08 +0000 UTC" firstStartedPulling="2025-07-14 22:48:35.204521897 +0000 UTC m=+43.595814183" lastFinishedPulling="2025-07-14 22:48:45.479175096 +0000 UTC m=+53.870467385" observedRunningTime="2025-07-14 22:48:46.729168202 +0000 UTC m=+55.120460495" watchObservedRunningTime="2025-07-14 22:48:46.76751878 +0000 UTC m=+55.158811071" Jul 14 22:48:47.454481 containerd[1647]: time="2025-07-14T22:48:47.454404521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:47.497384 containerd[1647]: time="2025-07-14T22:48:47.497310938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 14 22:48:47.507379 containerd[1647]: time="2025-07-14T22:48:47.502636182Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:47.523527 containerd[1647]: time="2025-07-14T22:48:47.511811866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:48:47.523527 containerd[1647]: time="2025-07-14T22:48:47.512234710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.008049656s" Jul 14 22:48:47.523527 containerd[1647]: time="2025-07-14T22:48:47.512251422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 22:48:47.525469 containerd[1647]: time="2025-07-14T22:48:47.525449744Z" level=info msg="CreateContainer within sandbox \"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:48:47.601858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234849517.mount: Deactivated successfully. Jul 14 22:48:47.625869 containerd[1647]: time="2025-07-14T22:48:47.625801245Z" level=info msg="CreateContainer within sandbox \"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"69e8c1209c1463a9a70065a741373446a0682a38c2a4828a0778f23a08c4eb74\"" Jul 14 22:48:47.635540 containerd[1647]: time="2025-07-14T22:48:47.626284965Z" level=info msg="StartContainer for \"69e8c1209c1463a9a70065a741373446a0682a38c2a4828a0778f23a08c4eb74\"" Jul 14 22:48:47.744859 containerd[1647]: time="2025-07-14T22:48:47.744801352Z" level=info msg="StartContainer for \"69e8c1209c1463a9a70065a741373446a0682a38c2a4828a0778f23a08c4eb74\" returns successfully" Jul 14 22:48:48.668557 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:48.666982 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:48.667005 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:48:50.172425 kubelet[2904]: I0714 22:48:50.167981 2904 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 22:48:50.184850 kubelet[2904]: I0714 22:48:50.184759 2904 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:48:50.715162 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:50.720395 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:50.715168 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:52.371363 containerd[1647]: time="2025-07-14T22:48:52.371277698Z" level=info msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" Jul 14 22:48:52.764076 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:52.765017 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:52.764082 systemd-resolved[1541]: Flushed all caches. Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:52.961 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0", GenerateName:"calico-kube-controllers-58bddbbd7-", Namespace:"calico-system", SelfLink:"", UID:"6e5c12ff-f464-401e-803f-958a1cf87455", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bddbbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4", Pod:"calico-kube-controllers-58bddbbd7-xf2v7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cff5a53759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:52.966 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:52.966 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" iface="eth0" netns="" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:52.966 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:52.966 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.281 [INFO][5836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.284 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.284 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.295 [WARNING][5836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.295 [INFO][5836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.295 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.298703 containerd[1647]: 2025-07-14 22:48:53.297 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.309942 containerd[1647]: time="2025-07-14T22:48:53.302017504Z" level=info msg="TearDown network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" successfully" Jul 14 22:48:53.309942 containerd[1647]: time="2025-07-14T22:48:53.302049365Z" level=info msg="StopPodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" returns successfully" Jul 14 22:48:53.410478 containerd[1647]: time="2025-07-14T22:48:53.409861055Z" level=info msg="RemovePodSandbox for \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" Jul 14 22:48:53.421385 containerd[1647]: time="2025-07-14T22:48:53.421354263Z" level=info msg="Forcibly stopping sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\"" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.534 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0", GenerateName:"calico-kube-controllers-58bddbbd7-", Namespace:"calico-system", SelfLink:"", UID:"6e5c12ff-f464-401e-803f-958a1cf87455", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bddbbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20c81f0cb8556777c066d18b75fff643fba81580bc8249cc785a4256cfdb10e4", Pod:"calico-kube-controllers-58bddbbd7-xf2v7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cff5a53759", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.535 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.535 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" iface="eth0" netns="" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.535 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.535 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.552 [INFO][5862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.552 [INFO][5862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.552 [INFO][5862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.556 [WARNING][5862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.556 [INFO][5862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" HandleID="k8s-pod-network.2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Workload="localhost-k8s-calico--kube--controllers--58bddbbd7--xf2v7-eth0" Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.557 [INFO][5862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.561309 containerd[1647]: 2025-07-14 22:48:53.559 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e" Jul 14 22:48:53.564716 containerd[1647]: time="2025-07-14T22:48:53.561342164Z" level=info msg="TearDown network for sandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" successfully" Jul 14 22:48:53.643293 containerd[1647]: time="2025-07-14T22:48:53.643258173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:53.647396 containerd[1647]: time="2025-07-14T22:48:53.647375382Z" level=info msg="RemovePodSandbox \"2a1150561925bddf190ca0976d94ca61566f7bd01fec93b03508341d9f258f5e\" returns successfully" Jul 14 22:48:53.653255 containerd[1647]: time="2025-07-14T22:48:53.653232327Z" level=info msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.684 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b5e7e152-33a4-475e-81b6-5e99058e6154", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485", Pod:"goldmane-58fd7646b9-c6d2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31c6b5f388e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.684 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.684 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" iface="eth0" netns="" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.684 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.684 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.704 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.704 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.704 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.708 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.708 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.708 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.711444 containerd[1647]: 2025-07-14 22:48:53.710 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.712345 containerd[1647]: time="2025-07-14T22:48:53.711815056Z" level=info msg="TearDown network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" successfully" Jul 14 22:48:53.712345 containerd[1647]: time="2025-07-14T22:48:53.711832841Z" level=info msg="StopPodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" returns successfully" Jul 14 22:48:53.712345 containerd[1647]: time="2025-07-14T22:48:53.712139793Z" level=info msg="RemovePodSandbox for \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" Jul 14 22:48:53.712345 containerd[1647]: time="2025-07-14T22:48:53.712156043Z" level=info msg="Forcibly stopping sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\"" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.738 [WARNING][5897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b5e7e152-33a4-475e-81b6-5e99058e6154", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a9504ffba451788cd4a1164cb2d6fc7a927315dfc20e83476e936db23d76485", Pod:"goldmane-58fd7646b9-c6d2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali31c6b5f388e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.738 [INFO][5897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.738 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" iface="eth0" netns="" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.738 [INFO][5897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.738 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.754 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.755 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.755 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.758 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.758 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" HandleID="k8s-pod-network.25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Workload="localhost-k8s-goldmane--58fd7646b9--c6d2z-eth0" Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.759 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.761908 containerd[1647]: 2025-07-14 22:48:53.760 [INFO][5897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90" Jul 14 22:48:53.761908 containerd[1647]: time="2025-07-14T22:48:53.761834682Z" level=info msg="TearDown network for sandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" successfully" Jul 14 22:48:53.765017 containerd[1647]: time="2025-07-14T22:48:53.765000473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:53.765063 containerd[1647]: time="2025-07-14T22:48:53.765035542Z" level=info msg="RemovePodSandbox \"25a9430590916749bdddebfb0b5fbad3ba8f65a13da22fbd0d318aaab67cab90\" returns successfully" Jul 14 22:48:53.765363 containerd[1647]: time="2025-07-14T22:48:53.765349167Z" level=info msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.788 [WARNING][5918] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" WorkloadEndpoint="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.789 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.789 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" iface="eth0" netns="" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.789 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.789 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.802 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.803 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.803 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.808 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.808 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.809 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.812074 containerd[1647]: 2025-07-14 22:48:53.810 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.812074 containerd[1647]: time="2025-07-14T22:48:53.811933649Z" level=info msg="TearDown network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" successfully" Jul 14 22:48:53.812074 containerd[1647]: time="2025-07-14T22:48:53.811949827Z" level=info msg="StopPodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" returns successfully" Jul 14 22:48:53.813004 containerd[1647]: time="2025-07-14T22:48:53.812240629Z" level=info msg="RemovePodSandbox for \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" Jul 14 22:48:53.813004 containerd[1647]: time="2025-07-14T22:48:53.812269284Z" level=info msg="Forcibly stopping sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\"" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.840 [WARNING][5939] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" WorkloadEndpoint="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.840 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.840 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" iface="eth0" netns="" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.840 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.840 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.857 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.857 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.857 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.861 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.861 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" HandleID="k8s-pod-network.181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Workload="localhost-k8s-whisker--6c897db977--sdsfz-eth0" Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.862 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.864800 containerd[1647]: 2025-07-14 22:48:53.863 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09" Jul 14 22:48:53.865995 containerd[1647]: time="2025-07-14T22:48:53.864825531Z" level=info msg="TearDown network for sandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" successfully" Jul 14 22:48:53.868247 containerd[1647]: time="2025-07-14T22:48:53.868227931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:53.868297 containerd[1647]: time="2025-07-14T22:48:53.868264704Z" level=info msg="RemovePodSandbox \"181daaed2da55bac42912658eeefb79cb7b48e91950d6111fc6887af05f2ed09\" returns successfully" Jul 14 22:48:53.868813 containerd[1647]: time="2025-07-14T22:48:53.868757004Z" level=info msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.924 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e45da145-d058-4e98-b935-90ad928df47e", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd", Pod:"calico-apiserver-77d5bbcd9f-rxk2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie41ca5c1d74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.927 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.927 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" iface="eth0" netns="" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.927 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.927 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.960 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.960 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.960 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.965 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.965 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.967 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:53.972925 containerd[1647]: 2025-07-14 22:48:53.969 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:53.972925 containerd[1647]: time="2025-07-14T22:48:53.971984273Z" level=info msg="TearDown network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" successfully" Jul 14 22:48:53.972925 containerd[1647]: time="2025-07-14T22:48:53.972008586Z" level=info msg="StopPodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" returns successfully" Jul 14 22:48:53.972925 containerd[1647]: time="2025-07-14T22:48:53.972380788Z" level=info msg="RemovePodSandbox for \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" Jul 14 22:48:53.972925 containerd[1647]: time="2025-07-14T22:48:53.972411742Z" level=info msg="Forcibly stopping sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\"" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.005 [WARNING][5981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e45da145-d058-4e98-b935-90ad928df47e", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631a57e6751a7b3eb79f5126b4e234a42a0a5d77acdaa82eb36180f231d284fd", Pod:"calico-apiserver-77d5bbcd9f-rxk2m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie41ca5c1d74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.005 [INFO][5981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.005 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" iface="eth0" netns="" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.005 [INFO][5981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.005 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.022 [INFO][5988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.022 [INFO][5988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.023 [INFO][5988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.028 [WARNING][5988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.028 [INFO][5988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" HandleID="k8s-pod-network.d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--rxk2m-eth0" Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.031 [INFO][5988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.038831 containerd[1647]: 2025-07-14 22:48:54.035 [INFO][5981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786" Jul 14 22:48:54.038831 containerd[1647]: time="2025-07-14T22:48:54.038693016Z" level=info msg="TearDown network for sandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" successfully" Jul 14 22:48:54.053703 containerd[1647]: time="2025-07-14T22:48:54.053678287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:54.053826 containerd[1647]: time="2025-07-14T22:48:54.053815613Z" level=info msg="RemovePodSandbox \"d116ef66974640c1c387c598585635d19686f4e057958521a59c62b407cb2786\" returns successfully" Jul 14 22:48:54.054153 containerd[1647]: time="2025-07-14T22:48:54.054142201Z" level=info msg="StopPodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.084 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ffbe111-cda0-4331-a242-4ff57bc4f605", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2", Pod:"coredns-7c65d6cfc9-gtzwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86417ade66", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.084 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.084 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" iface="eth0" netns="" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.084 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.084 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.104 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.104 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.104 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.110 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.110 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.111 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.117426 containerd[1647]: 2025-07-14 22:48:54.114 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.117426 containerd[1647]: time="2025-07-14T22:48:54.117299784Z" level=info msg="TearDown network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" successfully" Jul 14 22:48:54.117426 containerd[1647]: time="2025-07-14T22:48:54.117316243Z" level=info msg="StopPodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" returns successfully" Jul 14 22:48:54.119448 containerd[1647]: time="2025-07-14T22:48:54.118020254Z" level=info msg="RemovePodSandbox for \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" Jul 14 22:48:54.119448 containerd[1647]: time="2025-07-14T22:48:54.118047460Z" level=info msg="Forcibly stopping sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\"" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.149 [WARNING][6023] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7ffbe111-cda0-4331-a242-4ff57bc4f605", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e8eb1ab15c6fbec83c67de8e7b8783c8f4a2df1739bd52edb2abedbdb05d6d2", Pod:"coredns-7c65d6cfc9-gtzwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86417ade66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.150 [INFO][6023] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.150 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" iface="eth0" netns="" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.150 [INFO][6023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.150 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.169 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.169 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.169 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.172 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.172 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" HandleID="k8s-pod-network.bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Workload="localhost-k8s-coredns--7c65d6cfc9--gtzwg-eth0" Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.173 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.176251 containerd[1647]: 2025-07-14 22:48:54.174 [INFO][6023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297" Jul 14 22:48:54.177328 containerd[1647]: time="2025-07-14T22:48:54.176272101Z" level=info msg="TearDown network for sandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" successfully" Jul 14 22:48:54.184715 containerd[1647]: time="2025-07-14T22:48:54.184614744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:54.184715 containerd[1647]: time="2025-07-14T22:48:54.184653781Z" level=info msg="RemovePodSandbox \"bd9eda5759c35bf186a67eac25d3145992baa4a23ddc146c2ed10eaf314fa297\" returns successfully" Jul 14 22:48:54.185184 containerd[1647]: time="2025-07-14T22:48:54.185022728Z" level=info msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.215 [WARNING][6045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ab63976-6c51-436b-83f8-138c3003c36b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409", Pod:"calico-apiserver-77d5bbcd9f-bfs2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7787838cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.215 [INFO][6045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.215 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" iface="eth0" netns="" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.215 [INFO][6045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.215 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.257 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.257 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.257 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.261 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.261 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.262 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.267990 containerd[1647]: 2025-07-14 22:48:54.265 [INFO][6045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.271633 containerd[1647]: time="2025-07-14T22:48:54.268289765Z" level=info msg="TearDown network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" successfully" Jul 14 22:48:54.271633 containerd[1647]: time="2025-07-14T22:48:54.268306182Z" level=info msg="StopPodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" returns successfully" Jul 14 22:48:54.271633 containerd[1647]: time="2025-07-14T22:48:54.268600782Z" level=info msg="RemovePodSandbox for \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" Jul 14 22:48:54.271633 containerd[1647]: time="2025-07-14T22:48:54.268615618Z" level=info msg="Forcibly stopping sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\"" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.330 [WARNING][6067] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0", GenerateName:"calico-apiserver-77d5bbcd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ab63976-6c51-436b-83f8-138c3003c36b", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d5bbcd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dbed6d81f56e52678440ca553284aa4f150f687d069a4d3aa092321efc50409", Pod:"calico-apiserver-77d5bbcd9f-bfs2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie7787838cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.330 [INFO][6067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.330 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" iface="eth0" netns="" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.330 [INFO][6067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.330 [INFO][6067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.355 [INFO][6074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.358 [INFO][6074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.358 [INFO][6074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.364 [WARNING][6074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.364 [INFO][6074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" HandleID="k8s-pod-network.35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Workload="localhost-k8s-calico--apiserver--77d5bbcd9f--bfs2w-eth0" Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.366 [INFO][6074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.378560 containerd[1647]: 2025-07-14 22:48:54.368 [INFO][6067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391" Jul 14 22:48:54.378560 containerd[1647]: time="2025-07-14T22:48:54.377325260Z" level=info msg="TearDown network for sandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" successfully" Jul 14 22:48:54.408367 containerd[1647]: time="2025-07-14T22:48:54.408230327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:54.409043 containerd[1647]: time="2025-07-14T22:48:54.408538101Z" level=info msg="RemovePodSandbox \"35e1f265fc455895e37599f3f822f6e951a54ca84be8aa596a85abf73900a391\" returns successfully" Jul 14 22:48:54.438968 containerd[1647]: time="2025-07-14T22:48:54.437316654Z" level=info msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.464 [WARNING][6088] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"55a1ec98-b34e-4be5-aa77-830413ef608b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4", Pod:"coredns-7c65d6cfc9-qt7fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3124dc45690", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.464 [INFO][6088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.464 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" iface="eth0" netns="" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.464 [INFO][6088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.464 [INFO][6088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.484 [INFO][6095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.485 [INFO][6095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.485 [INFO][6095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.493 [WARNING][6095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.493 [INFO][6095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.496 [INFO][6095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.501282 containerd[1647]: 2025-07-14 22:48:54.499 [INFO][6088] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.501750 containerd[1647]: time="2025-07-14T22:48:54.501322270Z" level=info msg="TearDown network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" successfully" Jul 14 22:48:54.501750 containerd[1647]: time="2025-07-14T22:48:54.501340311Z" level=info msg="StopPodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" returns successfully" Jul 14 22:48:54.501750 containerd[1647]: time="2025-07-14T22:48:54.501734008Z" level=info msg="RemovePodSandbox for \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" Jul 14 22:48:54.501838 containerd[1647]: time="2025-07-14T22:48:54.501756380Z" level=info msg="Forcibly stopping sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\"" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.534 [WARNING][6109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"55a1ec98-b34e-4be5-aa77-830413ef608b", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 47, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1614aed4fcf9fdfda96a9eeb81ab1d477bd528a44be7b736d05a8ca30de8c4", Pod:"coredns-7c65d6cfc9-qt7fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3124dc45690", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.534 [INFO][6109] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.534 [INFO][6109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" iface="eth0" netns="" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.534 [INFO][6109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.534 [INFO][6109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.555 [INFO][6116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.555 [INFO][6116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.555 [INFO][6116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.559 [WARNING][6116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.559 [INFO][6116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" HandleID="k8s-pod-network.46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Workload="localhost-k8s-coredns--7c65d6cfc9--qt7fd-eth0" Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.560 [INFO][6116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.563318 containerd[1647]: 2025-07-14 22:48:54.561 [INFO][6109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4" Jul 14 22:48:54.564270 containerd[1647]: time="2025-07-14T22:48:54.563396813Z" level=info msg="TearDown network for sandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" successfully" Jul 14 22:48:54.567799 containerd[1647]: time="2025-07-14T22:48:54.567778329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 14 22:48:54.567936 containerd[1647]: time="2025-07-14T22:48:54.567819049Z" level=info msg="RemovePodSandbox \"46a1d2ca8b76004d02310e568cd53fd650a52a5d3ecad032094d772fa269a2d4\" returns successfully" Jul 14 22:48:54.568280 containerd[1647]: time="2025-07-14T22:48:54.568119925Z" level=info msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.604 [WARNING][6130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33683c0b-99a6-49cc-aa17-19ada6d1c944", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6", Pod:"csi-node-driver-f78g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5882607113a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.605 [INFO][6130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.605 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" iface="eth0" netns="" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.605 [INFO][6130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.605 [INFO][6130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.619 [INFO][6137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.619 [INFO][6137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.619 [INFO][6137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.622 [WARNING][6137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.623 [INFO][6137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.623 [INFO][6137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.626194 containerd[1647]: 2025-07-14 22:48:54.624 [INFO][6130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.626488 containerd[1647]: time="2025-07-14T22:48:54.626227879Z" level=info msg="TearDown network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" successfully" Jul 14 22:48:54.626488 containerd[1647]: time="2025-07-14T22:48:54.626244517Z" level=info msg="StopPodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" returns successfully" Jul 14 22:48:54.626629 containerd[1647]: time="2025-07-14T22:48:54.626575876Z" level=info msg="RemovePodSandbox for \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" Jul 14 22:48:54.626654 containerd[1647]: time="2025-07-14T22:48:54.626631883Z" level=info msg="Forcibly stopping sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\"" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.648 [WARNING][6151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f78g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33683c0b-99a6-49cc-aa17-19ada6d1c944", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f5205990f74e0c76941908ba3bf688cd5f37af613fd17132abf4a8081d13ee6", Pod:"csi-node-driver-f78g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5882607113a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.648 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.648 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" iface="eth0" netns="" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.648 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.648 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.661 [INFO][6158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.661 [INFO][6158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.661 [INFO][6158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.665 [WARNING][6158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.665 [INFO][6158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" HandleID="k8s-pod-network.0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Workload="localhost-k8s-csi--node--driver--f78g4-eth0" Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.666 [INFO][6158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:48:54.671342 containerd[1647]: 2025-07-14 22:48:54.669 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c" Jul 14 22:48:54.671342 containerd[1647]: time="2025-07-14T22:48:54.671326473Z" level=info msg="TearDown network for sandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" successfully" Jul 14 22:48:54.673502 containerd[1647]: time="2025-07-14T22:48:54.673486698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:48:54.673955 containerd[1647]: time="2025-07-14T22:48:54.673523033Z" level=info msg="RemovePodSandbox \"0a7dd75a6ea966f208f3023566629d6115f7728e6919df1c479ac65d1af0336c\" returns successfully" Jul 14 22:48:56.859125 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:48:56.863019 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:48:56.859131 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:49:02.363152 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:02.363158 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:02.365112 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:12.347054 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:12.359097 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:12.347060 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:12.626092 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.68.195:45056.service - OpenSSH per-connection server daemon (139.178.68.195:45056). Jul 14 22:49:13.943021 sshd[6188]: Accepted publickey for core from 139.178.68.195 port 45056 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:13.953703 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:13.986248 systemd-logind[1613]: New session 10 of user core. Jul 14 22:49:13.992648 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:49:14.396931 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:14.396320 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:14.396326 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:15.064996 sshd[6188]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:15.076066 systemd[1]: sshd@7-139.178.70.103:22-139.178.68.195:45056.service: Deactivated successfully. Jul 14 22:49:15.080694 systemd-logind[1613]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:49:15.082107 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:49:15.083410 systemd-logind[1613]: Removed session 10. Jul 14 22:49:16.443899 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:16.445848 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:16.445856 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:49:17.600786 kubelet[2904]: I0714 22:49:17.586163 2904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f78g4" podStartSLOduration=54.641857296 podStartE2EDuration="1m9.547055922s" podCreationTimestamp="2025-07-14 22:48:08 +0000 UTC" firstStartedPulling="2025-07-14 22:48:32.60760587 +0000 UTC m=+40.998898156" lastFinishedPulling="2025-07-14 22:48:47.512804496 +0000 UTC m=+55.904096782" observedRunningTime="2025-07-14 22:48:48.510460044 +0000 UTC m=+56.901752339" watchObservedRunningTime="2025-07-14 22:49:17.547055922 +0000 UTC m=+85.938348212" Jul 14 22:49:18.493007 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:18.492067 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:18.492071 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:20.145102 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:42774.service - OpenSSH per-connection server daemon (139.178.68.195:42774). Jul 14 22:49:20.498907 systemd[1]: run-containerd-runc-k8s.io-540fe05e00e3f29e1c3d736f5c9495773651d0a2e90ed20c305cba7bc1af6eef-runc.0Wdh0R.mount: Deactivated successfully. Jul 14 22:49:20.539126 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:20.539130 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:20.561636 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:21.029282 sshd[6259]: Accepted publickey for core from 139.178.68.195 port 42774 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:21.070011 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:21.091205 systemd-logind[1613]: New session 11 of user core. Jul 14 22:49:21.096257 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:49:22.589019 systemd-journald[1180]: Under memory pressure, flushing caches. 
Jul 14 22:49:22.587328 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:22.587334 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:22.723523 sshd[6259]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:22.732426 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:42778.service - OpenSSH per-connection server daemon (139.178.68.195:42778). Jul 14 22:49:22.734955 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:42774.service: Deactivated successfully. Jul 14 22:49:22.739701 systemd-logind[1613]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:49:22.740076 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:49:22.741095 systemd-logind[1613]: Removed session 11. Jul 14 22:49:22.789062 sshd[6300]: Accepted publickey for core from 139.178.68.195 port 42778 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:22.790401 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:22.800913 systemd-logind[1613]: New session 12 of user core. Jul 14 22:49:22.804133 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:49:22.975339 sshd[6300]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:22.979097 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:42778.service: Deactivated successfully. Jul 14 22:49:22.984773 systemd-logind[1613]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:49:22.996398 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:42782.service - OpenSSH per-connection server daemon (139.178.68.195:42782). Jul 14 22:49:22.998452 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:49:23.001381 systemd-logind[1613]: Removed session 12. 
Jul 14 22:49:23.053897 sshd[6315]: Accepted publickey for core from 139.178.68.195 port 42782 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:23.056992 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:23.064042 systemd-logind[1613]: New session 13 of user core. Jul 14 22:49:23.069116 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:49:23.199509 sshd[6315]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:23.202101 systemd-logind[1613]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:49:23.204786 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:42782.service: Deactivated successfully. Jul 14 22:49:23.208304 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:49:23.210067 systemd-logind[1613]: Removed session 13. Jul 14 22:49:24.635993 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:24.637005 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:24.637010 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:26.683262 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:26.684118 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:26.683266 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:28.218063 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:42792.service - OpenSSH per-connection server daemon (139.178.68.195:42792). Jul 14 22:49:28.330220 sshd[6355]: Accepted publickey for core from 139.178.68.195 port 42792 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:28.333569 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:28.336968 systemd-logind[1613]: New session 14 of user core. Jul 14 22:49:28.341205 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 14 22:49:28.731307 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:28.732009 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:28.731312 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:29.594177 sshd[6355]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:29.610802 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:42792.service: Deactivated successfully. Jul 14 22:49:29.612773 systemd-logind[1613]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:49:29.618271 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:49:29.619437 systemd-logind[1613]: Removed session 14. Jul 14 22:49:34.609062 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:44036.service - OpenSSH per-connection server daemon (139.178.68.195:44036). Jul 14 22:49:34.702833 sshd[6370]: Accepted publickey for core from 139.178.68.195 port 44036 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:34.705850 sshd[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:34.709507 systemd-logind[1613]: New session 15 of user core. Jul 14 22:49:34.714041 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 22:49:35.288199 sshd[6370]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:35.292044 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:44036.service: Deactivated successfully. Jul 14 22:49:35.294308 systemd-logind[1613]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:49:35.295274 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:49:35.296226 systemd-logind[1613]: Removed session 15. Jul 14 22:49:36.347193 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:36.352534 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:36.347199 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:49:38.396002 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:38.395095 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:38.395101 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:40.295363 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:42364.service - OpenSSH per-connection server daemon (139.178.68.195:42364). Jul 14 22:49:40.391898 sshd[6384]: Accepted publickey for core from 139.178.68.195 port 42364 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:40.408467 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:40.415080 systemd-logind[1613]: New session 16 of user core. Jul 14 22:49:40.419078 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:49:42.366980 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:42.363161 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:42.363166 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:42.780118 sshd[6384]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:42.789880 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:42364.service: Deactivated successfully. Jul 14 22:49:42.791235 systemd-logind[1613]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:49:42.791244 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:49:42.792687 systemd-logind[1613]: Removed session 16. Jul 14 22:49:44.411464 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:44.411473 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:44.412911 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:46.459930 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:46.459086 systemd-resolved[1541]: Under memory pressure, flushing caches. 
Jul 14 22:49:46.459092 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:47.805068 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:42374.service - OpenSSH per-connection server daemon (139.178.68.195:42374). Jul 14 22:49:47.916522 sshd[6459]: Accepted publickey for core from 139.178.68.195 port 42374 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:47.920037 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:47.923032 systemd-logind[1613]: New session 17 of user core. Jul 14 22:49:47.930086 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:49:48.537462 sshd[6459]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:48.547781 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:42382.service - OpenSSH per-connection server daemon (139.178.68.195:42382). Jul 14 22:49:48.548335 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:42374.service: Deactivated successfully. Jul 14 22:49:48.550551 systemd-logind[1613]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:49:48.551860 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:49:48.552630 systemd-logind[1613]: Removed session 17. Jul 14 22:49:48.599412 sshd[6470]: Accepted publickey for core from 139.178.68.195 port 42382 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:48.600218 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:48.604016 systemd-logind[1613]: New session 18 of user core. Jul 14 22:49:48.607030 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:49:49.077717 sshd[6470]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:49.085129 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:42384.service - OpenSSH per-connection server daemon (139.178.68.195:42384). 
Jul 14 22:49:49.105578 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:42382.service: Deactivated successfully. Jul 14 22:49:49.112773 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:49:49.114203 systemd-logind[1613]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:49:49.116371 systemd-logind[1613]: Removed session 18. Jul 14 22:49:49.193341 sshd[6482]: Accepted publickey for core from 139.178.68.195 port 42384 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:49.192414 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:49.200210 systemd-logind[1613]: New session 19 of user core. Jul 14 22:49:49.207166 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:49:50.364457 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:50.365287 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:50.365291 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:52.475031 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:52.474186 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:52.474192 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:54.087983 sshd[6482]: pam_unix(sshd:session): session closed for user core Jul 14 22:49:54.128084 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:42000.service - OpenSSH per-connection server daemon (139.178.68.195:42000). Jul 14 22:49:54.134342 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:42384.service: Deactivated successfully. Jul 14 22:49:54.143512 systemd-logind[1613]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:49:54.143631 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:49:54.150055 systemd-logind[1613]: Removed session 19. 
Jul 14 22:49:54.328087 sshd[6530]: Accepted publickey for core from 139.178.68.195 port 42000 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:49:54.329753 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:49:54.339933 systemd-logind[1613]: New session 20 of user core. Jul 14 22:49:54.344744 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 22:49:54.460338 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:54.464596 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:54.464603 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:55.706947 kubelet[2904]: E0714 22:49:55.574281 2904 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.203s" Jul 14 22:49:56.529645 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:56.620710 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:56.529654 systemd-resolved[1541]: Flushed all caches. Jul 14 22:49:58.576549 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:49:58.570532 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:49:58.570543 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:00.638997 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:00.629309 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:00.638792 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:02.673162 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:02.666316 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:02.666326 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:50:04.560057 sshd[6530]: pam_unix(sshd:session): session closed for user core Jul 14 22:50:04.680121 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:52824.service - OpenSSH per-connection server daemon (139.178.68.195:52824). Jul 14 22:50:04.808068 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:04.698735 systemd-logind[1613]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:50:04.699812 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:42000.service: Deactivated successfully. Jul 14 22:50:04.701744 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:50:04.702541 systemd-logind[1613]: Removed session 20. Jul 14 22:50:04.703850 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:04.703856 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:05.212230 sshd[6571]: Accepted publickey for core from 139.178.68.195 port 52824 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:50:05.221618 sshd[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:50:05.234605 systemd-logind[1613]: New session 21 of user core. Jul 14 22:50:05.240762 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 22:50:06.002634 kubelet[2904]: E0714 22:50:05.998942 2904 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.234s" Jul 14 22:50:06.760912 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:06.767525 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:06.767539 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:08.726967 sshd[6571]: pam_unix(sshd:session): session closed for user core Jul 14 22:50:08.997511 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:08.805623 systemd-resolved[1541]: Under memory pressure, flushing caches. 
Jul 14 22:50:08.805632 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:08.859899 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:52824.service: Deactivated successfully. Jul 14 22:50:08.871594 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:50:08.871596 systemd-logind[1613]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:50:08.895762 systemd-logind[1613]: Removed session 21. Jul 14 22:50:10.853903 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:10.860495 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:10.860506 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:13.761108 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:47440.service - OpenSSH per-connection server daemon (139.178.68.195:47440). Jul 14 22:50:14.363904 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:14.371066 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:14.438620 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:50:14.502880 sshd[6610]: Accepted publickey for core from 139.178.68.195 port 47440 ssh2: RSA SHA256:P8jM/IfcNW8CAymy10q/kVwVtMfLKdB1OJSN9TFdkWE Jul 14 22:50:14.371073 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:14.473546 systemd-logind[1613]: New session 22 of user core. Jul 14 22:50:14.483242 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 22:50:16.433181 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:16.424324 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:16.424335 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:18.464104 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:18.556536 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:18.464115 systemd-resolved[1541]: Flushed all caches. 
Jul 14 22:50:20.525041 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:20.507348 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:20.515082 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:22.563478 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:22.562040 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:22.562049 systemd-resolved[1541]: Flushed all caches. Jul 14 22:50:23.856141 kubelet[2904]: E0714 22:50:23.849407 2904 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.259s" Jul 14 22:50:24.561283 sshd[6610]: pam_unix(sshd:session): session closed for user core Jul 14 22:50:24.584394 systemd-logind[1613]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:50:24.584518 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:47440.service: Deactivated successfully. Jul 14 22:50:24.585738 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:50:24.588579 systemd-logind[1613]: Removed session 22. Jul 14 22:50:24.603944 systemd-journald[1180]: Under memory pressure, flushing caches. Jul 14 22:50:24.605922 systemd-resolved[1541]: Under memory pressure, flushing caches. Jul 14 22:50:24.605929 systemd-resolved[1541]: Flushed all caches.