Jul 6 23:55:01.750524 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:55:01.750541 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:01.750548 kernel: Disabled fast string operations Jul 6 23:55:01.750559 kernel: BIOS-provided physical RAM map: Jul 6 23:55:01.750563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 6 23:55:01.750567 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 6 23:55:01.750574 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 6 23:55:01.750578 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 6 23:55:01.750582 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 6 23:55:01.750587 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 6 23:55:01.750591 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 6 23:55:01.750595 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 6 23:55:01.750599 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 6 23:55:01.750603 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 6 23:55:01.750610 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 6 23:55:01.750615 kernel: NX (Execute Disable) protection: active Jul 6 23:55:01.750619 kernel: APIC: Static calls initialized Jul 6 23:55:01.750624 kernel: SMBIOS 2.7 present. 
Jul 6 23:55:01.750629 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 6 23:55:01.750634 kernel: vmware: hypercall mode: 0x00 Jul 6 23:55:01.750638 kernel: Hypervisor detected: VMware Jul 6 23:55:01.750643 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 6 23:55:01.750649 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 6 23:55:01.750653 kernel: vmware: using clock offset of 2662504574 ns Jul 6 23:55:01.750658 kernel: tsc: Detected 3408.000 MHz processor Jul 6 23:55:01.750663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:55:01.750669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:55:01.750673 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 6 23:55:01.750678 kernel: total RAM covered: 3072M Jul 6 23:55:01.750683 kernel: Found optimal setting for mtrr clean up Jul 6 23:55:01.750688 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 6 23:55:01.750694 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jul 6 23:55:01.750699 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:55:01.750704 kernel: Using GB pages for direct mapping Jul 6 23:55:01.750709 kernel: ACPI: Early table checksum verification disabled Jul 6 23:55:01.750713 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 6 23:55:01.750719 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 6 23:55:01.750723 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 6 23:55:01.750728 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 6 23:55:01.750733 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 6 23:55:01.750741 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 6 23:55:01.750746 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 
00000001) Jul 6 23:55:01.750751 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 6 23:55:01.750757 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 6 23:55:01.750762 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 6 23:55:01.750768 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 6 23:55:01.750773 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 6 23:55:01.750778 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 6 23:55:01.750783 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 6 23:55:01.750788 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 6 23:55:01.750794 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 6 23:55:01.750799 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 6 23:55:01.750804 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 6 23:55:01.750809 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 6 23:55:01.750814 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 6 23:55:01.750820 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 6 23:55:01.750825 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 6 23:55:01.750830 kernel: system APIC only can use physical flat Jul 6 23:55:01.750835 kernel: APIC: Switched APIC routing to: physical flat Jul 6 23:55:01.750840 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 6 23:55:01.750845 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 6 23:55:01.750850 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 6 23:55:01.750855 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 6 23:55:01.750860 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 6 
23:55:01.750866 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 6 23:55:01.750871 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 6 23:55:01.750876 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 6 23:55:01.750881 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 6 23:55:01.750886 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 6 23:55:01.750891 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 6 23:55:01.750896 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 6 23:55:01.750901 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 6 23:55:01.750906 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 6 23:55:01.750911 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 6 23:55:01.750917 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 6 23:55:01.750922 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 6 23:55:01.750927 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 6 23:55:01.750932 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 6 23:55:01.750937 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 6 23:55:01.750942 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 6 23:55:01.750947 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 6 23:55:01.750951 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 6 23:55:01.750956 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 6 23:55:01.750961 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 6 23:55:01.750966 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 6 23:55:01.750972 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 6 23:55:01.750977 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 6 23:55:01.750982 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 6 23:55:01.750987 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 6 23:55:01.750992 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 6 23:55:01.750997 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 6 23:55:01.751002 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 6 23:55:01.751007 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 6 23:55:01.751012 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 6 23:55:01.751017 kernel: SRAT: PXM 0 -> APIC 0x46 
-> Node 0 Jul 6 23:55:01.751023 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 6 23:55:01.751028 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 6 23:55:01.751033 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 6 23:55:01.751038 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 6 23:55:01.751043 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 6 23:55:01.751048 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 6 23:55:01.751053 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 6 23:55:01.751058 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 6 23:55:01.751062 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 6 23:55:01.751068 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 6 23:55:01.751073 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 6 23:55:01.751079 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 6 23:55:01.751083 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 6 23:55:01.751088 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 6 23:55:01.751093 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 6 23:55:01.751098 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 6 23:55:01.751103 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 6 23:55:01.751108 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jul 6 23:55:01.751113 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 6 23:55:01.751118 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 6 23:55:01.751124 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 6 23:55:01.751129 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 6 23:55:01.751134 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 6 23:55:01.751144 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 6 23:55:01.751149 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 6 23:55:01.751154 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 6 23:55:01.751159 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 6 23:55:01.751165 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 6 23:55:01.751170 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 6 23:55:01.751176 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 6 23:55:01.751182 kernel: SRAT: PXM 
0 -> APIC 0x84 -> Node 0 Jul 6 23:55:01.751187 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 6 23:55:01.751192 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 6 23:55:01.751197 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 6 23:55:01.751203 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 6 23:55:01.751208 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 6 23:55:01.751213 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 6 23:55:01.751219 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 6 23:55:01.751224 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 6 23:55:01.751230 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 6 23:55:01.751235 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 6 23:55:01.751241 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 6 23:55:01.751246 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 6 23:55:01.751251 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 6 23:55:01.751256 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 6 23:55:01.751262 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 6 23:55:01.751267 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 6 23:55:01.751272 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 6 23:55:01.751278 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 6 23:55:01.751284 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 6 23:55:01.751289 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 6 23:55:01.751295 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 6 23:55:01.751300 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 6 23:55:01.751305 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 6 23:55:01.751311 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 6 23:55:01.751316 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 6 23:55:01.751321 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 6 23:55:01.751327 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 6 23:55:01.751332 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 6 23:55:01.751339 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 6 23:55:01.751344 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 6 23:55:01.751349 
kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 6 23:55:01.751354 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 6 23:55:01.751360 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 6 23:55:01.751365 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 6 23:55:01.751370 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 6 23:55:01.751376 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 6 23:55:01.751381 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 6 23:55:01.751386 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 6 23:55:01.751393 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 6 23:55:01.751398 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 6 23:55:01.751404 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jul 6 23:55:01.751409 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 6 23:55:01.751414 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 6 23:55:01.751419 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 6 23:55:01.751425 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 6 23:55:01.751430 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 6 23:55:01.751435 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 6 23:55:01.751441 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 6 23:55:01.751446 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 6 23:55:01.751452 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 6 23:55:01.751458 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 6 23:55:01.751463 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 6 23:55:01.751468 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 6 23:55:01.751473 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 6 23:55:01.751478 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 6 23:55:01.751484 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 6 23:55:01.751489 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 6 23:55:01.751507 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 6 23:55:01.751512 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 6 23:55:01.751519 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 6 23:55:01.751525 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 6 
23:55:01.751530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 6 23:55:01.751536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 6 23:55:01.751541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 6 23:55:01.751547 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 6 23:55:01.752450 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 6 23:55:01.752457 kernel: Zone ranges: Jul 6 23:55:01.752462 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:55:01.752470 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 6 23:55:01.752476 kernel: Normal empty Jul 6 23:55:01.752481 kernel: Movable zone start for each node Jul 6 23:55:01.752487 kernel: Early memory node ranges Jul 6 23:55:01.752492 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 6 23:55:01.752498 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 6 23:55:01.752503 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 6 23:55:01.752509 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 6 23:55:01.752514 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:55:01.752520 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 6 23:55:01.752526 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 6 23:55:01.752532 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 6 23:55:01.752538 kernel: system APIC only can use physical flat Jul 6 23:55:01.752543 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 6 23:55:01.752555 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 6 23:55:01.752561 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 6 23:55:01.752567 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 6 23:55:01.752572 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 6 23:55:01.752578 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x05] high edge lint[0x1]) Jul 6 23:55:01.752584 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 6 23:55:01.752590 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 6 23:55:01.752595 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 6 23:55:01.752601 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 6 23:55:01.752606 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 6 23:55:01.752612 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 6 23:55:01.752617 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 6 23:55:01.752622 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 6 23:55:01.752628 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 6 23:55:01.752633 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 6 23:55:01.752640 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 6 23:55:01.752645 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 6 23:55:01.752650 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 6 23:55:01.752656 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 6 23:55:01.752661 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 6 23:55:01.752666 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 6 23:55:01.752672 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 6 23:55:01.752677 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 6 23:55:01.752683 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 6 23:55:01.752689 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 6 23:55:01.752695 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 6 23:55:01.752700 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 6 23:55:01.752705 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 6 23:55:01.752711 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge 
lint[0x1]) Jul 6 23:55:01.752716 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 6 23:55:01.752722 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 6 23:55:01.752727 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 6 23:55:01.752732 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 6 23:55:01.752738 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 6 23:55:01.752744 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 6 23:55:01.752750 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 6 23:55:01.752755 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 6 23:55:01.752760 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 6 23:55:01.752766 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 6 23:55:01.752771 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 6 23:55:01.752777 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 6 23:55:01.752782 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 6 23:55:01.752787 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 6 23:55:01.752793 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 6 23:55:01.752799 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 6 23:55:01.752804 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 6 23:55:01.752810 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 6 23:55:01.752815 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 6 23:55:01.752820 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 6 23:55:01.752826 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 6 23:55:01.752831 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 6 23:55:01.752837 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 6 23:55:01.752842 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 6 
23:55:01.752849 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 6 23:55:01.752854 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 6 23:55:01.752859 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 6 23:55:01.752865 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 6 23:55:01.752870 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 6 23:55:01.752876 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 6 23:55:01.752881 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 6 23:55:01.752886 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 6 23:55:01.752892 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 6 23:55:01.752897 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 6 23:55:01.752903 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 6 23:55:01.752909 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 6 23:55:01.752914 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 6 23:55:01.752919 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 6 23:55:01.752925 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 6 23:55:01.752930 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 6 23:55:01.752936 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 6 23:55:01.752941 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 6 23:55:01.752946 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 6 23:55:01.752953 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 6 23:55:01.752958 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 6 23:55:01.752963 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 6 23:55:01.752969 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 6 23:55:01.752974 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 6 23:55:01.752979 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 6 23:55:01.752985 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 6 23:55:01.752990 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 6 23:55:01.752995 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 6 23:55:01.753001 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 6 23:55:01.753008 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 6 23:55:01.753013 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 6 23:55:01.753019 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 6 23:55:01.753024 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 6 23:55:01.753029 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 6 23:55:01.753035 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 6 23:55:01.753040 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 6 23:55:01.753046 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 6 23:55:01.753051 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 6 23:55:01.753057 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 6 23:55:01.753063 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 6 23:55:01.753068 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 6 23:55:01.753074 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 6 23:55:01.753079 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 6 23:55:01.753085 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 6 23:55:01.753090 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 6 23:55:01.753096 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 6 23:55:01.753101 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 6 23:55:01.753106 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 6 23:55:01.753113 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high 
edge lint[0x1]) Jul 6 23:55:01.753118 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 6 23:55:01.753124 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 6 23:55:01.753129 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 6 23:55:01.753134 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 6 23:55:01.753140 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 6 23:55:01.753145 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 6 23:55:01.753151 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 6 23:55:01.753156 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 6 23:55:01.753161 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 6 23:55:01.753168 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 6 23:55:01.753173 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 6 23:55:01.753178 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 6 23:55:01.753184 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 6 23:55:01.753189 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 6 23:55:01.753194 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 6 23:55:01.753200 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 6 23:55:01.753205 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 6 23:55:01.753210 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 6 23:55:01.753222 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 6 23:55:01.753228 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 6 23:55:01.753242 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 6 23:55:01.753258 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 6 23:55:01.753273 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 6 23:55:01.753289 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 6 
23:55:01.753298 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 6 23:55:01.753304 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:55:01.753310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 6 23:55:01.753315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:55:01.753323 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 6 23:55:01.753328 kernel: TSC deadline timer available Jul 6 23:55:01.753334 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 6 23:55:01.753339 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 6 23:55:01.753345 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 6 23:55:01.753350 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:55:01.753356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 6 23:55:01.753361 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 6 23:55:01.753367 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 6 23:55:01.753373 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 6 23:55:01.753379 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 6 23:55:01.753384 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 6 23:55:01.753390 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 6 23:55:01.753395 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 6 23:55:01.753409 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 6 23:55:01.753416 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 6 23:55:01.753421 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 6 23:55:01.753427 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 6 23:55:01.753434 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 6 23:55:01.753439 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 6 
23:55:01.753445 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 6 23:55:01.753451 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 6 23:55:01.753457 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 6 23:55:01.753462 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 6 23:55:01.753468 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 6 23:55:01.753474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:01.753482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:55:01.753488 kernel: random: crng init done Jul 6 23:55:01.753494 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 6 23:55:01.753499 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 6 23:55:01.753505 kernel: printk: log_buf_len min size: 262144 bytes Jul 6 23:55:01.753511 kernel: printk: log_buf_len: 1048576 bytes Jul 6 23:55:01.753517 kernel: printk: early log buf free: 239648(91%) Jul 6 23:55:01.753523 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:55:01.753529 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 6 23:55:01.753536 kernel: Fallback order for Node 0: 0 Jul 6 23:55:01.753541 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515808 Jul 6 23:55:01.753547 kernel: Policy zone: DMA32 Jul 6 23:55:01.753969 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:55:01.753976 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 159996K reserved, 0K cma-reserved) Jul 6 23:55:01.753986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 6 23:55:01.753992 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:55:01.753998 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:55:01.754003 kernel: Dynamic Preempt: voluntary Jul 6 23:55:01.754009 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:55:01.754016 kernel: rcu: RCU event tracing is enabled. Jul 6 23:55:01.754021 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 6 23:55:01.754027 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:55:01.754033 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:55:01.754039 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:55:01.754046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:55:01.754052 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 6 23:55:01.754058 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 6 23:55:01.754064 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Jul 6 23:55:01.754070 kernel: Console: colour VGA+ 80x25
Jul 6 23:55:01.754076 kernel: printk: console [tty0] enabled
Jul 6 23:55:01.754082 kernel: printk: console [ttyS0] enabled
Jul 6 23:55:01.754087 kernel: ACPI: Core revision 20230628
Jul 6 23:55:01.754094 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 6 23:55:01.754100 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:55:01.754106 kernel: x2apic enabled
Jul 6 23:55:01.754112 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:55:01.754118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:55:01.754124 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 6 23:55:01.754131 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 6 23:55:01.754136 kernel: Disabled fast string operations
Jul 6 23:55:01.754142 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 6 23:55:01.754148 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 6 23:55:01.754155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:55:01.754161 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 6 23:55:01.754167 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 6 23:55:01.754174 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 6 23:55:01.754180 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 6 23:55:01.754186 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 6 23:55:01.754192 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:55:01.754198 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:55:01.754204 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 6 23:55:01.754211 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 6 23:55:01.754217 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 6 23:55:01.754222 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 6 23:55:01.754228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:55:01.754234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:55:01.754240 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:55:01.754246 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:55:01.754252 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:55:01.754258 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:55:01.754265 kernel: pid_max: default: 131072 minimum: 1024
Jul 6 23:55:01.754271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:55:01.754276 kernel: landlock: Up and running.
Jul 6 23:55:01.754282 kernel: SELinux: Initializing.
Jul 6 23:55:01.754288 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.754294 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.754300 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 6 23:55:01.754306 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 6 23:55:01.754312 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 6 23:55:01.754319 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 6 23:55:01.754325 kernel: Performance Events: Skylake events, core PMU driver.
Jul 6 23:55:01.754331 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 6 23:55:01.754337 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 6 23:55:01.754343 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 6 23:55:01.754348 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 6 23:55:01.754354 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 6 23:55:01.754360 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 6 23:55:01.754367 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 6 23:55:01.754372 kernel: ... version: 1
Jul 6 23:55:01.754378 kernel: ... bit width: 48
Jul 6 23:55:01.754384 kernel: ... generic registers: 4
Jul 6 23:55:01.754390 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:55:01.754396 kernel: ... max period: 000000007fffffff
Jul 6 23:55:01.754401 kernel: ... fixed-purpose events: 0
Jul 6 23:55:01.754407 kernel: ... event mask: 000000000000000f
Jul 6 23:55:01.754413 kernel: signal: max sigframe size: 1776
Jul 6 23:55:01.754420 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:55:01.754426 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:55:01.754432 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 6 23:55:01.754438 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:55:01.754443 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:55:01.754449 kernel: .... node #0, CPUs: #1
Jul 6 23:55:01.754455 kernel: Disabled fast string operations
Jul 6 23:55:01.754461 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Jul 6 23:55:01.754467 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 6 23:55:01.754472 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:55:01.754480 kernel: smpboot: Max logical packages: 128
Jul 6 23:55:01.754485 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jul 6 23:55:01.754491 kernel: devtmpfs: initialized
Jul 6 23:55:01.754509 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:55:01.754516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jul 6 23:55:01.754522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:55:01.754528 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 6 23:55:01.754534 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:55:01.754540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:55:01.754580 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:55:01.754588 kernel: audit: type=2000 audit(1751846100.090:1): state=initialized audit_enabled=0 res=1
Jul 6 23:55:01.754594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:55:01.754600 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:55:01.754606 kernel: cpuidle: using governor menu
Jul 6 23:55:01.754611 kernel: Simple Boot Flag at 0x36 set to 0x80
Jul 6 23:55:01.754617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:55:01.754625 kernel: dca service started, version 1.12.1
Jul 6 23:55:01.754631 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jul 6 23:55:01.754638 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:55:01.754644 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:55:01.754650 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:55:01.754656 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:55:01.754662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:55:01.754668 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:55:01.754674 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:55:01.754679 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:55:01.754685 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:55:01.754692 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:55:01.754698 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 6 23:55:01.754704 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:55:01.754710 kernel: ACPI: Interpreter enabled
Jul 6 23:55:01.754715 kernel: ACPI: PM: (supports S0 S1 S5)
Jul 6 23:55:01.754721 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:55:01.754727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:55:01.754733 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:55:01.754739 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jul 6 23:55:01.754746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jul 6 23:55:01.754829 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:55:01.754887 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jul 6 23:55:01.754937 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jul 6 23:55:01.754946 kernel: PCI host bridge to bus 0000:00
Jul 6 23:55:01.754997 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:01.755088 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jul 6 23:55:01.755135 kernel:
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.755180 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:55:01.755225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 6 23:55:01.755269 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 6 23:55:01.755334 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 6 23:55:01.755391 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 6 23:55:01.755451 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 6 23:55:01.755519 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 6 23:55:01.755587 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 6 23:55:01.755640 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 6 23:55:01.755692 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 6 23:55:01.755743 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 6 23:55:01.755797 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 6 23:55:01.755853 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 6 23:55:01.755967 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 6 23:55:01.756041 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 6 23:55:01.756113 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 6 23:55:01.756167 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 6 23:55:01.756219 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 6 23:55:01.756277 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 6 23:55:01.756328 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 6 23:55:01.756378 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 6 23:55:01.756427 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 6 23:55:01.756477 kernel: 
pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 6 23:55:01.756528 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:55:01.756597 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 6 23:55:01.756658 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.756711 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.756768 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.756821 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.756876 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.756929 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757012 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757101 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757161 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757213 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757268 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757320 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757379 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757431 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757485 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757537 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757620 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757696 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757757 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.757809 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757864 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 
0x060400 Jul 6 23:55:01.757916 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.757971 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758026 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758081 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758133 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758188 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758240 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758295 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758347 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758405 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758457 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758532 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758597 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758653 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758705 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758763 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758816 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758870 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.758922 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.758977 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.759030 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.759088 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.759139 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.759194 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 
class 0x060400 Jul 6 23:55:01.759246 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.759300 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.759352 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.759408 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.759463 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.759518 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.761630 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.761700 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.761756 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.761813 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.761870 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.761925 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.761979 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.762034 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.762087 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.762142 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.762197 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.762252 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 6 23:55:01.762322 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.762380 kernel: pci_bus 0000:01: extended config space not accessible Jul 6 23:55:01.762433 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:55:01.762488 kernel: pci_bus 0000:02: extended config space not accessible Jul 6 23:55:01.762500 kernel: acpiphp: Slot [32] registered Jul 6 23:55:01.762506 kernel: acpiphp: Slot [33] registered Jul 6 23:55:01.762513 kernel: acpiphp: 
Slot [34] registered Jul 6 23:55:01.762518 kernel: acpiphp: Slot [35] registered Jul 6 23:55:01.762524 kernel: acpiphp: Slot [36] registered Jul 6 23:55:01.762530 kernel: acpiphp: Slot [37] registered Jul 6 23:55:01.762536 kernel: acpiphp: Slot [38] registered Jul 6 23:55:01.762542 kernel: acpiphp: Slot [39] registered Jul 6 23:55:01.762554 kernel: acpiphp: Slot [40] registered Jul 6 23:55:01.762561 kernel: acpiphp: Slot [41] registered Jul 6 23:55:01.762568 kernel: acpiphp: Slot [42] registered Jul 6 23:55:01.762575 kernel: acpiphp: Slot [43] registered Jul 6 23:55:01.762580 kernel: acpiphp: Slot [44] registered Jul 6 23:55:01.762586 kernel: acpiphp: Slot [45] registered Jul 6 23:55:01.762598 kernel: acpiphp: Slot [46] registered Jul 6 23:55:01.762610 kernel: acpiphp: Slot [47] registered Jul 6 23:55:01.762622 kernel: acpiphp: Slot [48] registered Jul 6 23:55:01.762628 kernel: acpiphp: Slot [49] registered Jul 6 23:55:01.762634 kernel: acpiphp: Slot [50] registered Jul 6 23:55:01.762642 kernel: acpiphp: Slot [51] registered Jul 6 23:55:01.762648 kernel: acpiphp: Slot [52] registered Jul 6 23:55:01.762654 kernel: acpiphp: Slot [53] registered Jul 6 23:55:01.762659 kernel: acpiphp: Slot [54] registered Jul 6 23:55:01.762665 kernel: acpiphp: Slot [55] registered Jul 6 23:55:01.762671 kernel: acpiphp: Slot [56] registered Jul 6 23:55:01.762677 kernel: acpiphp: Slot [57] registered Jul 6 23:55:01.762683 kernel: acpiphp: Slot [58] registered Jul 6 23:55:01.762689 kernel: acpiphp: Slot [59] registered Jul 6 23:55:01.762696 kernel: acpiphp: Slot [60] registered Jul 6 23:55:01.762702 kernel: acpiphp: Slot [61] registered Jul 6 23:55:01.762708 kernel: acpiphp: Slot [62] registered Jul 6 23:55:01.762714 kernel: acpiphp: Slot [63] registered Jul 6 23:55:01.762772 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 6 23:55:01.762825 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:55:01.762877 kernel: pci 0000:00:11.0: bridge window 
[mem 0xfd600000-0xfdffffff] Jul 6 23:55:01.762927 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:55:01.762977 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 6 23:55:01.763030 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 6 23:55:01.763081 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 6 23:55:01.763131 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 6 23:55:01.763182 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 6 23:55:01.763238 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 6 23:55:01.763292 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 6 23:55:01.763345 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 6 23:55:01.763401 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:55:01.763453 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:55:01.763505 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:55:01.764284 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:55:01.764347 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:55:01.764403 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:55:01.764458 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:55:01.764519 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:55:01.765671 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:55:01.765733 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:55:01.765791 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:55:01.765844 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:55:01.765896 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 23:55:01.765948 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:55:01.766002 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:55:01.766057 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:55:01.766109 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:55:01.766162 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:55:01.766213 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:55:01.766264 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:55:01.766319 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:55:01.766371 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:55:01.766422 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:55:01.766476 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:55:01.766527 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:55:01.767606 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Jul 6 23:55:01.767665 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:55:01.767722 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:55:01.767773 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:55:01.767830 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 6 23:55:01.767884 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 6 23:55:01.767937 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 6 23:55:01.767989 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 6 23:55:01.768042 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 6 23:55:01.768094 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 6 23:55:01.768150 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 6 23:55:01.768203 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:55:01.768254 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 6 23:55:01.768307 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:55:01.768359 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:55:01.768410 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:55:01.768462 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:55:01.768521 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 6 23:55:01.769623 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:55:01.769682 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:55:01.769741 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:55:01.769794 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:55:01.769860 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:55:01.769911 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:55:01.769963 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:55:01.770017 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:55:01.770068 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:55:01.770120 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:55:01.770170 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:55:01.770219 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:55:01.770271 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:55:01.770320 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:55:01.770369 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:55:01.770423 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:55:01.770474 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:55:01.770524 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Jul 6 23:55:01.771694 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:55:01.771749 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:55:01.771800 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:55:01.771852 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:55:01.771903 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 6 23:55:01.771958 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:55:01.772008 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:55:01.772061 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:55:01.772111 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:55:01.772162 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:55:01.772212 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:55:01.772266 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:55:01.772364 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:55:01.772418 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:55:01.772498 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:55:01.774564 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:55:01.774620 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:55:01.774671 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:55:01.774726 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:55:01.774776 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:55:01.774832 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:55:01.774885 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:55:01.774937 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Jul 6 23:55:01.774988 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:55:01.775041 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:55:01.775093 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:55:01.775143 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:55:01.775197 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 23:55:01.775252 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:55:01.775303 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:55:01.775356 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:55:01.775529 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:55:01.775758 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:55:01.775812 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:55:01.775901 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:55:01.775962 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:55:01.776017 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:55:01.776068 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:55:01.776121 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:55:01.776171 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:55:01.776223 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:55:01.776275 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:55:01.776326 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:55:01.776376 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:55:01.776433 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:55:01.776485 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Jul 6 23:55:01.776535 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:55:01.779816 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:55:01.779874 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:55:01.779927 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:55:01.779980 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 23:55:01.780032 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:55:01.780087 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:55:01.780140 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:55:01.780191 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:55:01.780242 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:55:01.780251 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 6 23:55:01.780257 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 6 23:55:01.780263 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 6 23:55:01.780269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:55:01.780277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 6 23:55:01.780283 kernel: iommu: Default domain type: Translated Jul 6 23:55:01.780289 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:55:01.780295 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:55:01.780301 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:55:01.780307 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 6 23:55:01.780313 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 6 23:55:01.780366 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 6 23:55:01.780418 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 6 23:55:01.780471 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:55:01.780480 kernel: vgaarb: loaded
Jul 6 23:55:01.780486 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jul 6 23:55:01.780492 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jul 6 23:55:01.780498 kernel: clocksource: Switched to clocksource tsc-early
Jul 6 23:55:01.780504 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:55:01.780510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:55:01.780516 kernel: pnp: PnP ACPI init
Jul 6 23:55:01.780585 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jul 6 23:55:01.780638 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jul 6 23:55:01.780685 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jul 6 23:55:01.780736 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jul 6 23:55:01.780787 kernel: pnp 00:06: [dma 2]
Jul 6 23:55:01.780840 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jul 6 23:55:01.780887 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jul 6 23:55:01.780936 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jul 6 23:55:01.780945 kernel: pnp: PnP ACPI: found 8 devices
Jul 6 23:55:01.780951 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:55:01.780957 kernel: NET: Registered PF_INET protocol family
Jul 6 23:55:01.780963 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:55:01.780969 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:55:01.780975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:55:01.780981 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:55:01.780987 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:55:01.780995 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:55:01.781001 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.781007 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.781013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:55:01.781019 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:55:01.781071 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jul 6 23:55:01.781125 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 6 23:55:01.781182 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 6 23:55:01.781235 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 6 23:55:01.781289 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 6 23:55:01.781342 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jul 6 23:55:01.781396 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jul 6 23:55:01.781449 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jul 6 23:55:01.781505 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jul 6 23:55:01.781579 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jul 6 23:55:01.781652 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jul 6 23:55:01.781706 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jul 6 23:55:01.781758 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jul 6 23:55:01.781812 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jul 6 23:55:01.781868 kernel: pci
0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 6 23:55:01.781920 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 6 23:55:01.781974 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 6 23:55:01.782026 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 6 23:55:01.782078 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 6 23:55:01.782131 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 6 23:55:01.782185 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 6 23:55:01.782237 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 6 23:55:01.782289 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 6 23:55:01.782340 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:55:01.782445 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:55:01.782500 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.784490 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.784575 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.784630 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.784684 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.784735 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.784788 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.784840 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.784893 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.784950 kernel: pci 0000:00:15.7: BAR 13: failed to assign 
[io size 0x1000] Jul 6 23:55:01.785003 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785054 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785107 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785158 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785211 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785263 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785316 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785370 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785423 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785475 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785534 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785607 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785661 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785711 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785764 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785818 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785871 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.785923 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.785976 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786028 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.786082 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786133 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.786186 kernel: pci 
0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786237 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.786293 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786345 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.786398 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786450 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.786502 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.786941 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787004 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787056 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787112 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787163 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787215 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787266 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787317 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787369 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787419 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787471 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787522 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787602 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787656 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787708 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787760 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 6 
23:55:01.787810 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787862 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.787913 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.787964 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788014 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788069 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788120 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788171 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788222 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788273 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788324 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788377 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788428 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788480 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788530 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788666 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788718 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788769 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788819 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788869 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.788919 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.788970 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.789021 kernel: pci 0000:00:15.6: BAR 13: failed to 
assign [io size 0x1000] Jul 6 23:55:01.789072 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.789127 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.789177 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.789227 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.789280 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 6 23:55:01.789331 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 6 23:55:01.789384 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:55:01.789435 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 6 23:55:01.789486 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 6 23:55:01.789536 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 6 23:55:01.789599 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:55:01.789662 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 6 23:55:01.789715 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 6 23:55:01.789767 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 6 23:55:01.789817 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 6 23:55:01.789869 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:55:01.789922 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 6 23:55:01.789974 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 6 23:55:01.790025 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 6 23:55:01.790079 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:55:01.790134 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 6 23:55:01.790186 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 6 23:55:01.790237 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 6 
23:55:01.790288 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:55:01.790340 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 6 23:55:01.790392 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 6 23:55:01.790444 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:55:01.790496 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 6 23:55:01.790558 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 6 23:55:01.790611 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:55:01.790667 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 6 23:55:01.790719 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 6 23:55:01.790770 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:55:01.790822 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 6 23:55:01.790876 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 6 23:55:01.790928 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:55:01.790980 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 6 23:55:01.791031 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 6 23:55:01.791083 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:55:01.791140 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 6 23:55:01.791194 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 6 23:55:01.791245 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 6 23:55:01.791297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 6 23:55:01.791352 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:55:01.791405 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 6 23:55:01.791457 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] 
Jul 6 23:55:01.791512 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 6 23:55:01.791576 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:55:01.791632 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 6 23:55:01.791684 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 6 23:55:01.791736 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 6 23:55:01.791787 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:55:01.791838 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 6 23:55:01.791892 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 6 23:55:01.791943 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:55:01.791995 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 6 23:55:01.792046 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 6 23:55:01.792098 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:55:01.792150 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 6 23:55:01.792201 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 6 23:55:01.792252 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:55:01.792304 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 6 23:55:01.792358 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 6 23:55:01.792457 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:55:01.792511 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 6 23:55:01.792574 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 6 23:55:01.792630 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:55:01.792683 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 6 23:55:01.792735 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 
6 23:55:01.792785 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:55:01.792837 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:55:01.792891 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:55:01.792947 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:55:01.792999 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:55:01.793050 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:55:01.793102 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:55:01.793154 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:55:01.793205 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:55:01.793256 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:55:01.793308 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:55:01.793358 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:55:01.793412 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:55:01.793463 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:55:01.793514 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:55:01.793572 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:55:01.793624 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:55:01.793675 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:55:01.793726 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:55:01.793779 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:55:01.793831 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:55:01.793883 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:55:01.793939 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 
23:55:01.793991 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:55:01.794043 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:55:01.794097 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:55:01.794149 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:55:01.794201 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:55:01.794252 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:55:01.794306 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:55:01.794358 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:55:01.794412 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:55:01.794463 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:55:01.794520 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:55:01.794631 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:55:01.794684 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:55:01.794737 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:55:01.794788 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:55:01.794841 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:55:01.794894 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:55:01.794946 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:55:01.795000 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:55:01.795054 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:55:01.795106 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:55:01.795158 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:55:01.795211 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 
23:55:01.795263 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:55:01.795314 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:55:01.795366 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:55:01.795419 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:55:01.795475 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:55:01.795532 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:55:01.795588 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:55:01.795635 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.795681 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:55:01.795726 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:55:01.795777 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 6 23:55:01.795825 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 6 23:55:01.795877 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:55:01.795924 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:55:01.795971 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:55:01.796019 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.796066 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:55:01.796113 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:55:01.796166 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 6 23:55:01.796231 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 6 23:55:01.796282 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:55:01.796335 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 6 23:55:01.796382 kernel: pci_bus 0000:04: resource 1 
[mem 0xfd100000-0xfd1fffff] Jul 6 23:55:01.796429 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:55:01.796480 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 6 23:55:01.796528 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 6 23:55:01.796744 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:55:01.796797 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 6 23:55:01.796845 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:55:01.796897 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 6 23:55:01.796944 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:55:01.796997 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 6 23:55:01.797048 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:55:01.797100 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 6 23:55:01.797147 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:55:01.797202 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 6 23:55:01.797250 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:55:01.797312 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 6 23:55:01.797363 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 6 23:55:01.797411 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:55:01.797463 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 6 23:55:01.797511 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 6 23:55:01.797608 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:55:01.797661 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 6 23:55:01.797710 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 6 
23:55:01.797765 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:55:01.797817 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 6 23:55:01.797865 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:55:01.797917 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 6 23:55:01.797964 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:55:01.798017 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 6 23:55:01.798067 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:55:01.798119 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 6 23:55:01.798167 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:55:01.798223 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 6 23:55:01.798272 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:55:01.798325 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 6 23:55:01.798376 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 6 23:55:01.798424 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:55:01.798476 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 6 23:55:01.798531 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 6 23:55:01.798597 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:55:01.798651 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 6 23:55:01.798699 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 6 23:55:01.798749 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:55:01.798800 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 6 23:55:01.798848 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:55:01.798901 kernel: 
pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 6 23:55:01.798950 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:55:01.799003 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 6 23:55:01.799054 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:55:01.799110 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 6 23:55:01.799158 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:55:01.799211 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 6 23:55:01.799260 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:55:01.799317 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 6 23:55:01.799370 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 6 23:55:01.799421 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:55:01.799474 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 6 23:55:01.799523 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 6 23:55:01.799591 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:55:01.799647 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 6 23:55:01.799698 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:55:01.799751 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 6 23:55:01.799799 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:55:01.799852 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 6 23:55:01.799901 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:55:01.799952 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 6 23:55:01.800001 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:55:01.800055 kernel: pci_bus 
0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 6 23:55:01.800104 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:55:01.800156 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 6 23:55:01.800204 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:55:01.800261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 6 23:55:01.800272 kernel: PCI: CLS 32 bytes, default 64 Jul 6 23:55:01.800280 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:55:01.800287 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:55:01.800293 kernel: clocksource: Switched to clocksource tsc Jul 6 23:55:01.800300 kernel: Initialise system trusted keyrings Jul 6 23:55:01.800307 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 6 23:55:01.800314 kernel: Key type asymmetric registered Jul 6 23:55:01.800320 kernel: Asymmetric key parser 'x509' registered Jul 6 23:55:01.800326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:55:01.800333 kernel: io scheduler mq-deadline registered Jul 6 23:55:01.800341 kernel: io scheduler kyber registered Jul 6 23:55:01.800348 kernel: io scheduler bfq registered Jul 6 23:55:01.800403 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 6 23:55:01.800457 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800513 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 6 23:55:01.800613 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800670 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 6 23:55:01.800723 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800779 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 6 23:55:01.800832 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800885 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 6 23:55:01.800938 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800991 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 6 23:55:01.801043 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801099 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 6 23:55:01.801151 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801204 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 6 23:55:01.801256 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801309 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 6 23:55:01.801365 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801419 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 6 23:55:01.801472 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801536 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 6 23:55:01.801668 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801723 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 6 23:55:01.801776 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801833 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 6 23:55:01.801886 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801939 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 6 23:55:01.801991 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802044 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 6 23:55:01.802100 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802153 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 6 23:55:01.802206 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802260 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 6 23:55:01.802314 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803611 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 6 23:55:01.803686 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803746 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 6 23:55:01.803801 kernel: pcieport 0000:00:17.2: 
pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803857 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 6 23:55:01.803912 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803967 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 6 23:55:01.804023 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804078 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 6 23:55:01.804132 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804185 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 6 23:55:01.804239 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804293 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 6 23:55:01.804350 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804404 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 6 23:55:01.804457 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804518 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 6 23:55:01.804594 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804651 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 6 23:55:01.804707 kernel: pcieport 
0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804762 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 6 23:55:01.804816 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804870 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 6 23:55:01.804924 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804981 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 6 23:55:01.805034 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805089 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 6 23:55:01.805141 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805195 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 6 23:55:01.805247 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805259 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:55:01.805266 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:55:01.805273 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:55:01.805279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 6 23:55:01.805285 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:55:01.805291 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:55:01.805348 kernel: rtc_cmos 00:01: registered as rtc0 Jul 6 23:55:01.805401 kernel: 
rtc_cmos 00:01: setting system clock to 2025-07-06T23:55:01 UTC (1751846101) Jul 6 23:55:01.805449 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 6 23:55:01.805458 kernel: intel_pstate: CPU model not supported Jul 6 23:55:01.805465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:55:01.805472 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:55:01.805479 kernel: Segment Routing with IPv6 Jul 6 23:55:01.805485 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:55:01.805491 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:55:01.805502 kernel: Key type dns_resolver registered Jul 6 23:55:01.805511 kernel: IPI shorthand broadcast: enabled Jul 6 23:55:01.805517 kernel: sched_clock: Marking stable (928335659, 228834771)->(1222504812, -65334382) Jul 6 23:55:01.805524 kernel: registered taskstats version 1 Jul 6 23:55:01.805530 kernel: Loading compiled-in X.509 certificates Jul 6 23:55:01.805537 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:55:01.805543 kernel: Key type .fscrypt registered Jul 6 23:55:01.806479 kernel: Key type fscrypt-provisioning registered Jul 6 23:55:01.806490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:55:01.806497 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:55:01.806507 kernel: ima: No architecture policies found Jul 6 23:55:01.806515 kernel: clk: Disabling unused clocks Jul 6 23:55:01.806522 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:55:01.806529 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:55:01.806535 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:55:01.806542 kernel: Run /init as init process Jul 6 23:55:01.806555 kernel: with arguments: Jul 6 23:55:01.806562 kernel: /init Jul 6 23:55:01.806569 kernel: with environment: Jul 6 23:55:01.806576 kernel: HOME=/ Jul 6 23:55:01.806583 kernel: TERM=linux Jul 6 23:55:01.806589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:55:01.806597 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:01.806607 systemd[1]: Detected virtualization vmware. Jul 6 23:55:01.806614 systemd[1]: Detected architecture x86-64. Jul 6 23:55:01.806620 systemd[1]: Running in initrd. Jul 6 23:55:01.806628 systemd[1]: No hostname configured, using default hostname. Jul 6 23:55:01.806634 systemd[1]: Hostname set to . Jul 6 23:55:01.806641 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:01.806647 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:55:01.806654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:01.806660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 6 23:55:01.806669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:55:01.806676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:01.806684 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:55:01.806691 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:55:01.806699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:55:01.806706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:55:01.806712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:01.806719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:01.806726 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:01.806734 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:01.806740 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:01.806747 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:01.806753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:01.806760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:01.806766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:01.806773 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:55:01.806779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:01.806786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:01.806795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:55:01.806801 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:01.806808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:55:01.806815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:01.806822 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:55:01.806828 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:55:01.806835 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:01.806841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:01.806849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:01.806872 systemd-journald[215]: Collecting audit messages is disabled. Jul 6 23:55:01.806889 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:01.806896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:01.806905 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:55:01.806912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:55:01.806918 kernel: Bridge firewalling registered Jul 6 23:55:01.806925 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:01.806931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:01.806940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:01.806946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:01.806953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:01.806960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:55:01.806966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:01.806974 systemd-journald[215]: Journal started Jul 6 23:55:01.806991 systemd-journald[215]: Runtime Journal (/run/log/journal/2119a5fe2e4c436b8627108f214aa588) is 4.8M, max 38.6M, 33.8M free. Jul 6 23:55:01.756079 systemd-modules-load[216]: Inserted module 'overlay' Jul 6 23:55:01.777230 systemd-modules-load[216]: Inserted module 'br_netfilter' Jul 6 23:55:01.813217 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:01.819686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:01.819924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:01.820120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:01.820298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:01.821667 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:55:01.826488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:01.829717 dracut-cmdline[246]: dracut-dracut-053 Jul 6 23:55:01.830745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:01.831862 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:01.849828 systemd-resolved[252]: Positive Trust Anchors: Jul 6 23:55:01.849839 systemd-resolved[252]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:01.849862 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:01.851780 systemd-resolved[252]: Defaulting to hostname 'linux'. Jul 6 23:55:01.852460 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:01.852625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:01.883574 kernel: SCSI subsystem initialized Jul 6 23:55:01.891571 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:55:01.898570 kernel: iscsi: registered transport (tcp) Jul 6 23:55:01.914096 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:55:01.914145 kernel: QLogic iSCSI HBA Driver Jul 6 23:55:01.935080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:01.946731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:55:01.961783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:55:01.961829 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:55:01.962855 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:55:01.994578 kernel: raid6: avx2x4 gen() 51848 MB/s Jul 6 23:55:02.011580 kernel: raid6: avx2x2 gen() 46149 MB/s Jul 6 23:55:02.028808 kernel: raid6: avx2x1 gen() 43207 MB/s Jul 6 23:55:02.028865 kernel: raid6: using algorithm avx2x4 gen() 51848 MB/s Jul 6 23:55:02.046852 kernel: raid6: .... xor() 18012 MB/s, rmw enabled Jul 6 23:55:02.046916 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:55:02.060574 kernel: xor: automatically using best checksumming function avx Jul 6 23:55:02.163577 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:55:02.169517 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:02.174692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:02.182285 systemd-udevd[432]: Using default interface naming scheme 'v255'. Jul 6 23:55:02.184880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:02.194753 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:55:02.202446 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Jul 6 23:55:02.221186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:02.226708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:02.303852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:02.309644 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:55:02.321362 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:02.321708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 6 23:55:02.322204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:02.322481 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:02.325693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:55:02.331775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:02.371565 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 6 23:55:02.373317 kernel: vmw_pvscsi: using 64bit dma Jul 6 23:55:02.373338 kernel: vmw_pvscsi: max_id: 16 Jul 6 23:55:02.373346 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 6 23:55:02.375558 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 6 23:55:02.375576 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 6 23:55:02.375616 kernel: vmw_pvscsi: using MSI-X Jul 6 23:55:02.378173 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 6 23:55:02.378263 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 6 23:55:02.379596 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 6 23:55:02.395574 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jul 6 23:55:02.399565 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 6 23:55:02.403560 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 6 23:55:02.407955 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:55:02.409793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:02.409867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:02.410843 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:02.410969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:02.411047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 6 23:55:02.411187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:02.415578 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 6 23:55:02.414730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:02.424624 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:55:02.426464 kernel: AES CTR mode by8 optimization enabled Jul 6 23:55:02.428571 kernel: libata version 3.00 loaded. Jul 6 23:55:02.429576 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 6 23:55:02.431578 kernel: scsi host1: ata_piix Jul 6 23:55:02.431677 kernel: scsi host2: ata_piix Jul 6 23:55:02.431747 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 6 23:55:02.431756 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 6 23:55:02.439318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:02.443708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:02.454204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:55:02.598583 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 6 23:55:02.604602 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 6 23:55:02.616904 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 6 23:55:02.617030 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:55:02.617097 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 6 23:55:02.617160 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 6 23:55:02.617589 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 6 23:55:02.623880 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:02.623909 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:55:02.627569 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 6 23:55:02.627665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:55:02.641581 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:55:02.669891 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (485) Jul 6 23:55:02.670324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jul 6 23:55:02.674040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jul 6 23:55:02.674620 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486) Jul 6 23:55:02.676779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 6 23:55:02.679712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jul 6 23:55:02.679846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jul 6 23:55:02.682617 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 6 23:55:02.709587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:02.717572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:03.723578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:55:03.724171 disk-uuid[587]: The operation has completed successfully. Jul 6 23:55:03.767137 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:55:03.767213 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:55:03.770656 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:55:03.774329 sh[605]: Success Jul 6 23:55:03.783599 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:55:03.842646 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:55:03.843854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:55:03.844204 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:55:03.860943 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:55:03.860985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:03.860993 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:55:03.862061 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:55:03.862882 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:55:03.872567 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:55:03.874364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:55:03.884700 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jul 6 23:55:03.886111 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 6 23:55:03.911924 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:03.911959 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:03.911968 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:03.922567 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:03.929802 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:55:03.930621 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:03.936786 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:55:03.940677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:55:03.965794 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 6 23:55:03.971736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 6 23:55:04.027168 ignition[664]: Ignition 2.19.0 Jul 6 23:55:04.027176 ignition[664]: Stage: fetch-offline Jul 6 23:55:04.027196 ignition[664]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.027201 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:55:04.027264 ignition[664]: parsed url from cmdline: "" Jul 6 23:55:04.027266 ignition[664]: no config URL provided Jul 6 23:55:04.027269 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:04.027273 ignition[664]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:04.029339 ignition[664]: config successfully fetched Jul 6 23:55:04.029360 ignition[664]: parsing config with SHA512: 9ec580518a5d41887662762ab23102ff355f39c474e2d172a5e2944f03f6cba03ff8f5b44f8d2b18c448f219b926140863af099d28d8edc898ad2680d556a10c Jul 6 23:55:04.034126 unknown[664]: fetched base config from "system" Jul 6 23:55:04.034400 ignition[664]: fetch-offline: fetch-offline passed Jul 6 23:55:04.034133 unknown[664]: fetched user config from "vmware" Jul 6 23:55:04.034461 ignition[664]: Ignition finished successfully Jul 6 23:55:04.034234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:04.040573 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:04.041581 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:04.052377 systemd-networkd[797]: lo: Link UP Jul 6 23:55:04.052384 systemd-networkd[797]: lo: Gained carrier Jul 6 23:55:04.053304 systemd-networkd[797]: Enumeration completed Jul 6 23:55:04.053462 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:04.053689 systemd-networkd[797]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 6 23:55:04.053925 systemd[1]: Reached target network.target - Network. 
Jul 6 23:55:04.054012 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:55:04.057460 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 6 23:55:04.057648 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 6 23:55:04.057218 systemd-networkd[797]: ens192: Link UP Jul 6 23:55:04.057220 systemd-networkd[797]: ens192: Gained carrier Jul 6 23:55:04.063718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:55:04.071877 ignition[800]: Ignition 2.19.0 Jul 6 23:55:04.071884 ignition[800]: Stage: kargs Jul 6 23:55:04.071993 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.071999 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:55:04.072608 ignition[800]: kargs: kargs passed Jul 6 23:55:04.072639 ignition[800]: Ignition finished successfully Jul 6 23:55:04.073759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:55:04.078696 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:04.085970 ignition[807]: Ignition 2.19.0 Jul 6 23:55:04.085976 ignition[807]: Stage: disks Jul 6 23:55:04.086099 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.086104 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:55:04.086744 ignition[807]: disks: disks passed Jul 6 23:55:04.086778 ignition[807]: Ignition finished successfully Jul 6 23:55:04.087450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:04.087966 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:04.088202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:04.088460 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 6 23:55:04.088694 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:04.088919 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:04.093753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:55:04.101806 systemd-resolved[252]: Detected conflict on linux IN A 139.178.70.109 Jul 6 23:55:04.102126 systemd-resolved[252]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jul 6 23:55:04.106638 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:55:04.108563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:55:04.112621 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:55:04.174562 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:55:04.174719 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:55:04.175210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:04.185645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:04.189622 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:55:04.189889 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:55:04.189918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:55:04.189933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:04.193385 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:55:04.194439 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 6 23:55:04.196574 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (824) Jul 6 23:55:04.200328 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:04.200349 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:04.200358 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:55:04.207609 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:55:04.207266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:55:04.232856 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:55:04.236957 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:55:04.239496 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:55:04.241964 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:55:04.297338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:04.300644 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:55:04.302225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:55:04.306591 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:04.318565 ignition[937]: INFO : Ignition 2.19.0 Jul 6 23:55:04.318565 ignition[937]: INFO : Stage: mount Jul 6 23:55:04.318565 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:04.318565 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 6 23:55:04.320571 ignition[937]: INFO : mount: mount passed Jul 6 23:55:04.320571 ignition[937]: INFO : Ignition finished successfully Jul 6 23:55:04.322739 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:04.327661 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 6 23:55:04.330831 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:04.859183 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:04.863679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:04.870566 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Jul 6 23:55:04.873153 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.873182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:04.873191 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:04.909582 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:04.913018 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:04.930635 ignition[965]: INFO : Ignition 2.19.0
Jul 6 23:55:04.930635 ignition[965]: INFO : Stage: files
Jul 6 23:55:04.930635 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.930635 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.930635 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:55:04.931490 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:55:04.931639 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:55:04.934695 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:55:04.935066 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:55:04.935508 unknown[965]: wrote ssh authorized keys file for user: core
Jul 6 23:55:04.935862 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:55:04.938584 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:55:04.976669 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:55:05.073109 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:55:05.506686 systemd-networkd[797]: ens192: Gained IPv6LL
Jul 6 23:55:05.771767 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:06.768448 ignition[965]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:06.771790 ignition[965]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:06.772718 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:06.772718 ignition[965]: INFO : files: files passed
Jul 6 23:55:06.772718 ignition[965]: INFO : Ignition finished successfully
Jul 6 23:55:06.772776 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:55:06.776651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:55:06.778766 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:55:06.794652 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.794652 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.795233 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.796546 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:06.796983 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:55:06.800714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:55:06.801359 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:55:06.801435 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:55:06.831136 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:55:06.831196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:55:06.831627 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:06.831781 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:55:06.831986 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:55:06.832416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:55:06.841807 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:06.847706 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:55:06.853159 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:06.853335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:06.853485 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:55:06.853672 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:55:06.853745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:06.854150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:55:06.854373 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:55:06.854577 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:55:06.854785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:06.854970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:06.855177 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:55:06.855378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:06.855634 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:55:06.855787 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:55:06.855989 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:55:06.856145 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:55:06.856211 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:06.856467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:06.856705 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:06.856889 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:55:06.856936 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:06.857098 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:55:06.857161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:06.857403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:55:06.857471 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:06.857728 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:55:06.857857 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:55:06.861590 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:06.861794 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:55:06.862003 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:55:06.862188 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:55:06.862246 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:55:06.862462 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:55:06.862511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:55:06.862762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:55:06.862828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:06.863073 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:55:06.863130 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:55:06.867720 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:55:06.867834 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:55:06.867926 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:06.869146 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:06.869244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:55:06.869333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:06.869514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:55:06.869603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:06.872747 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:55:06.872807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:55:06.877561 ignition[1019]: INFO : Ignition 2.19.0
Jul 6 23:55:06.877561 ignition[1019]: INFO : Stage: umount
Jul 6 23:55:06.877561 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:06.877561 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:06.878776 ignition[1019]: INFO : umount: umount passed
Jul 6 23:55:06.878776 ignition[1019]: INFO : Ignition finished successfully
Jul 6 23:55:06.879792 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:55:06.879871 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:55:06.880135 systemd[1]: Stopped target network.target - Network.
Jul 6 23:55:06.880241 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:55:06.880269 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:55:06.880420 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:55:06.880445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:06.880592 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:55:06.880614 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:55:06.880764 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:55:06.880786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:55:06.881046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:55:06.881311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:06.885545 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:55:06.885632 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:55:06.886650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:55:06.886673 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:06.889659 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:55:06.890464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:55:06.890515 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:06.890657 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 6 23:55:06.890680 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 6 23:55:06.891234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:06.892197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:55:06.892251 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:06.894987 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:55:06.895040 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:06.895256 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:55:06.895281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:06.895391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:55:06.895414 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:06.899316 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:55:06.900943 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:55:06.901001 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:55:06.903809 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:55:06.904003 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:06.904530 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:55:06.904596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:55:06.904755 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:55:06.904787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:55:06.904945 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:55:06.904970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:55:06.905243 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:55:06.905266 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:55:06.905591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:06.905617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:06.908686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:55:06.908802 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:55:06.908834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:06.908970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:06.908993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:06.912318 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:55:06.912374 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:55:07.054742 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:55:07.054830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:07.055130 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:55:07.055254 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:55:07.055281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:07.059645 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:55:07.093961 systemd[1]: Switching root.
Jul 6 23:55:07.120609 systemd-journald[215]: Journal stopped
23:55:01.752711 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 6 23:55:01.752716 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 6 23:55:01.752722 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 6 23:55:01.752727 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 6 23:55:01.752732 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 6 23:55:01.752738 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 6 23:55:01.752744 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 6 23:55:01.752750 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 6 23:55:01.752755 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 6 23:55:01.752760 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 6 23:55:01.752766 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 6 23:55:01.752771 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 6 23:55:01.752777 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 6 23:55:01.752782 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 6 23:55:01.752787 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 6 23:55:01.752793 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 6 23:55:01.752799 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 6 23:55:01.752804 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 6 23:55:01.752810 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 6 23:55:01.752815 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 6 23:55:01.752820 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 6 23:55:01.752826 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 6 23:55:01.752831 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 6 23:55:01.752837 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 6 23:55:01.752842 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 6 23:55:01.752849 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 6 23:55:01.752854 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 6 23:55:01.752859 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 6 23:55:01.752865 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 6 23:55:01.752870 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 6 23:55:01.752876 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 6 23:55:01.752881 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 6 23:55:01.752886 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 6 23:55:01.752892 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 6 23:55:01.752897 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 6 23:55:01.752903 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 6 23:55:01.752909 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 6 23:55:01.752914 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 6 23:55:01.752919 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 6 23:55:01.752925 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 6 23:55:01.752930 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 6 23:55:01.752936 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 6 23:55:01.752941 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 6 23:55:01.752946 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 6 23:55:01.752953 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 6 23:55:01.752958 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 6 23:55:01.752963 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 6 23:55:01.752969 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 6 23:55:01.752974 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high 
edge lint[0x1]) Jul 6 23:55:01.752979 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 6 23:55:01.752985 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 6 23:55:01.752990 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 6 23:55:01.752995 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 6 23:55:01.753001 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 6 23:55:01.753008 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 6 23:55:01.753013 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 6 23:55:01.753019 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 6 23:55:01.753024 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 6 23:55:01.753029 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 6 23:55:01.753035 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 6 23:55:01.753040 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 6 23:55:01.753046 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 6 23:55:01.753051 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 6 23:55:01.753057 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 6 23:55:01.753063 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 6 23:55:01.753068 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 6 23:55:01.753074 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 6 23:55:01.753079 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 6 23:55:01.753085 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 6 23:55:01.753090 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 6 23:55:01.753096 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 6 23:55:01.753101 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 6 23:55:01.753106 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 6 
23:55:01.753113 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 6 23:55:01.753118 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 6 23:55:01.753124 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 6 23:55:01.753129 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 6 23:55:01.753134 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 6 23:55:01.753140 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 6 23:55:01.753145 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 6 23:55:01.753151 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 6 23:55:01.753156 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 6 23:55:01.753161 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 6 23:55:01.753168 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 6 23:55:01.753173 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 6 23:55:01.753178 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 6 23:55:01.753184 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 6 23:55:01.753189 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 6 23:55:01.753194 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 6 23:55:01.753200 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 6 23:55:01.753205 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 6 23:55:01.753210 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 6 23:55:01.753222 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 6 23:55:01.753228 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 6 23:55:01.753242 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 6 23:55:01.753258 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 6 23:55:01.753273 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 6 23:55:01.753289 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 6 23:55:01.753298 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 6 23:55:01.753304 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:55:01.753310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 6 23:55:01.753315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:55:01.753323 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 6 23:55:01.753328 kernel: TSC deadline timer available Jul 6 23:55:01.753334 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 6 23:55:01.753339 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 6 23:55:01.753345 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 6 23:55:01.753350 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:55:01.753356 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jul 6 23:55:01.753361 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 6 23:55:01.753367 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 6 23:55:01.753373 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 6 23:55:01.753379 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 6 23:55:01.753384 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 6 23:55:01.753390 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 6 23:55:01.753395 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 6 23:55:01.753409 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 6 23:55:01.753416 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 6 23:55:01.753421 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 6 23:55:01.753427 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 6 23:55:01.753434 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 6 23:55:01.753439 kernel: 
pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 6 23:55:01.753445 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 6 23:55:01.753451 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 6 23:55:01.753457 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 6 23:55:01.753462 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 6 23:55:01.753468 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 6 23:55:01.753474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:01.753482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:55:01.753488 kernel: random: crng init done Jul 6 23:55:01.753494 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 6 23:55:01.753499 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 6 23:55:01.753505 kernel: printk: log_buf_len min size: 262144 bytes Jul 6 23:55:01.753511 kernel: printk: log_buf_len: 1048576 bytes Jul 6 23:55:01.753517 kernel: printk: early log buf free: 239648(91%) Jul 6 23:55:01.753523 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:55:01.753529 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 6 23:55:01.753536 kernel: Fallback order for Node 0: 0 Jul 6 23:55:01.753541 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515808 Jul 6 23:55:01.753547 kernel: Policy zone: DMA32 Jul 6 23:55:01.753969 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:55:01.753976 kernel: Memory: 1936372K/2096628K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 159996K reserved, 0K cma-reserved) Jul 6 23:55:01.753986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 6 23:55:01.753992 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:55:01.753998 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:55:01.754003 kernel: Dynamic Preempt: voluntary Jul 6 23:55:01.754009 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:55:01.754016 kernel: rcu: RCU event tracing is enabled. Jul 6 23:55:01.754021 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 6 23:55:01.754027 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:55:01.754033 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:55:01.754039 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:55:01.754046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:55:01.754052 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 6 23:55:01.754058 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 6 23:55:01.754064 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Jul 6 23:55:01.754070 kernel: Console: colour VGA+ 80x25 Jul 6 23:55:01.754076 kernel: printk: console [tty0] enabled Jul 6 23:55:01.754082 kernel: printk: console [ttyS0] enabled Jul 6 23:55:01.754087 kernel: ACPI: Core revision 20230628 Jul 6 23:55:01.754094 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 6 23:55:01.754100 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:55:01.754106 kernel: x2apic enabled Jul 6 23:55:01.754112 kernel: APIC: Switched APIC routing to: physical x2apic Jul 6 23:55:01.754118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 6 23:55:01.754124 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:55:01.754131 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jul 6 23:55:01.754136 kernel: Disabled fast string operations Jul 6 23:55:01.754142 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:55:01.754148 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:55:01.754155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:55:01.754161 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 6 23:55:01.754167 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 6 23:55:01.754174 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 6 23:55:01.754180 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 6 23:55:01.754186 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 6 23:55:01.754192 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:55:01.754198 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:55:01.754204 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 6 23:55:01.754211 
kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 6 23:55:01.754217 kernel: GDS: Unknown: Dependent on hypervisor status Jul 6 23:55:01.754222 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:55:01.754228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:55:01.754234 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:55:01.754240 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:55:01.754246 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:55:01.754252 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 6 23:55:01.754258 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:55:01.754265 kernel: pid_max: default: 131072 minimum: 1024 Jul 6 23:55:01.754271 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:55:01.754276 kernel: landlock: Up and running. Jul 6 23:55:01.754282 kernel: SELinux: Initializing. Jul 6 23:55:01.754288 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:55:01.754294 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 6 23:55:01.754300 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 6 23:55:01.754306 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:55:01.754312 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:55:01.754319 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jul 6 23:55:01.754325 kernel: Performance Events: Skylake events, core PMU driver. 
Jul 6 23:55:01.754331 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 6 23:55:01.754337 kernel: core: CPUID marked event: 'instructions' unavailable Jul 6 23:55:01.754343 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 6 23:55:01.754348 kernel: core: CPUID marked event: 'cache references' unavailable Jul 6 23:55:01.754354 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 6 23:55:01.754360 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 6 23:55:01.754367 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 6 23:55:01.754372 kernel: ... version: 1 Jul 6 23:55:01.754378 kernel: ... bit width: 48 Jul 6 23:55:01.754384 kernel: ... generic registers: 4 Jul 6 23:55:01.754390 kernel: ... value mask: 0000ffffffffffff Jul 6 23:55:01.754396 kernel: ... max period: 000000007fffffff Jul 6 23:55:01.754401 kernel: ... fixed-purpose events: 0 Jul 6 23:55:01.754407 kernel: ... event mask: 000000000000000f Jul 6 23:55:01.754413 kernel: signal: max sigframe size: 1776 Jul 6 23:55:01.754420 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:55:01.754426 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:55:01.754432 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 6 23:55:01.754438 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:55:01.754443 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:55:01.754449 kernel: .... 
node #0, CPUs: #1 Jul 6 23:55:01.754455 kernel: Disabled fast string operations Jul 6 23:55:01.754461 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 6 23:55:01.754467 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 6 23:55:01.754472 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:55:01.754480 kernel: smpboot: Max logical packages: 128 Jul 6 23:55:01.754485 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 6 23:55:01.754491 kernel: devtmpfs: initialized Jul 6 23:55:01.754509 kernel: x86/mm: Memory block size: 128MB Jul 6 23:55:01.754516 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 6 23:55:01.754522 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:55:01.754528 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 6 23:55:01.754534 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:55:01.754540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:55:01.754580 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:55:01.754588 kernel: audit: type=2000 audit(1751846100.090:1): state=initialized audit_enabled=0 res=1 Jul 6 23:55:01.754594 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:55:01.754600 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:55:01.754606 kernel: cpuidle: using governor menu Jul 6 23:55:01.754611 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 6 23:55:01.754617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:55:01.754625 kernel: dca service started, version 1.12.1 Jul 6 23:55:01.754631 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 6 23:55:01.754638 kernel: PCI: Using configuration type 1 for base access Jul 6 23:55:01.754644 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Jul 6 23:55:01.754650 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:55:01.754656 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:55:01.754662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:55:01.754668 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:55:01.754674 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:55:01.754679 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:55:01.754685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:55:01.754692 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:55:01.754698 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 6 23:55:01.754704 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:55:01.754710 kernel: ACPI: Interpreter enabled Jul 6 23:55:01.754715 kernel: ACPI: PM: (supports S0 S1 S5) Jul 6 23:55:01.754721 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:55:01.754727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:55:01.754733 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:55:01.754739 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 6 23:55:01.754746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 6 23:55:01.754829 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:55:01.754887 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 6 23:55:01.754937 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 6 23:55:01.754946 kernel: PCI host bridge to bus 0000:00 Jul 6 23:55:01.754997 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:55:01.755088 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 6 23:55:01.755135 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.755180 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:55:01.755225 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 6 23:55:01.755269 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 6 23:55:01.755334 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 6 23:55:01.755391 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 6 23:55:01.755451 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 6 23:55:01.755519 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 6 23:55:01.755587 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 6 23:55:01.755640 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 6 23:55:01.755692 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 6 23:55:01.755743 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 6 23:55:01.755797 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 6 23:55:01.755853 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 6 23:55:01.755967 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 6 23:55:01.756041 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 6 23:55:01.756113 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 6 23:55:01.756167 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 6 23:55:01.756219 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 6 23:55:01.756277 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 6 23:55:01.756328 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 6 23:55:01.756378 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 6 23:55:01.756427 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 6 23:55:01.756477 kernel: 
pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jul 6 23:55:01.756528 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:55:01.756597 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jul 6 23:55:01.756658 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.756711 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.756768 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.756821 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.756876 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.756929 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757012 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757101 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757161 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757213 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757268 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757320 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757379 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757431 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757485 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757537 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757620 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757696 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757757 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757809 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757864 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.757916 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.757971 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758026 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758081 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758133 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758188 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758240 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758295 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758347 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758405 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758457 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758532 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758597 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758653 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758705 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758763 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758816 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758870 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.758922 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.758977 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.759030 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.759088 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.759139 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.759194 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.759246 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.759300 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.759352 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.759408 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.759463 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.759518 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.761630 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.761700 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.761756 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.761813 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.761870 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.761925 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.761979 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.762034 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.762087 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.762142 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.762197 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.762252 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Jul 6 23:55:01.762322 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.762380 kernel: pci_bus 0000:01: extended config space not accessible
Jul 6 23:55:01.762433 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 6 23:55:01.762488 kernel: pci_bus 0000:02: extended config space not accessible
Jul 6 23:55:01.762500 kernel: acpiphp: Slot [32] registered
Jul 6 23:55:01.762506 kernel: acpiphp: Slot [33] registered
Jul 6 23:55:01.762513 kernel: acpiphp: Slot [34] registered
Jul 6 23:55:01.762518 kernel: acpiphp: Slot [35] registered
Jul 6 23:55:01.762524 kernel: acpiphp: Slot [36] registered
Jul 6 23:55:01.762530 kernel: acpiphp: Slot [37] registered
Jul 6 23:55:01.762536 kernel: acpiphp: Slot [38] registered
Jul 6 23:55:01.762542 kernel: acpiphp: Slot [39] registered
Jul 6 23:55:01.762554 kernel: acpiphp: Slot [40] registered
Jul 6 23:55:01.762561 kernel: acpiphp: Slot [41] registered
Jul 6 23:55:01.762568 kernel: acpiphp: Slot [42] registered
Jul 6 23:55:01.762575 kernel: acpiphp: Slot [43] registered
Jul 6 23:55:01.762580 kernel: acpiphp: Slot [44] registered
Jul 6 23:55:01.762586 kernel: acpiphp: Slot [45] registered
Jul 6 23:55:01.762598 kernel: acpiphp: Slot [46] registered
Jul 6 23:55:01.762610 kernel: acpiphp: Slot [47] registered
Jul 6 23:55:01.762622 kernel: acpiphp: Slot [48] registered
Jul 6 23:55:01.762628 kernel: acpiphp: Slot [49] registered
Jul 6 23:55:01.762634 kernel: acpiphp: Slot [50] registered
Jul 6 23:55:01.762642 kernel: acpiphp: Slot [51] registered
Jul 6 23:55:01.762648 kernel: acpiphp: Slot [52] registered
Jul 6 23:55:01.762654 kernel: acpiphp: Slot [53] registered
Jul 6 23:55:01.762659 kernel: acpiphp: Slot [54] registered
Jul 6 23:55:01.762665 kernel: acpiphp: Slot [55] registered
Jul 6 23:55:01.762671 kernel: acpiphp: Slot [56] registered
Jul 6 23:55:01.762677 kernel: acpiphp: Slot [57] registered
Jul 6 23:55:01.762683 kernel: acpiphp: Slot [58] registered
Jul 6 23:55:01.762689 kernel: acpiphp: Slot [59] registered
Jul 6 23:55:01.762696 kernel: acpiphp: Slot [60] registered
Jul 6 23:55:01.762702 kernel: acpiphp: Slot [61] registered
Jul 6 23:55:01.762708 kernel: acpiphp: Slot [62] registered
Jul 6 23:55:01.762714 kernel: acpiphp: Slot [63] registered
Jul 6 23:55:01.762772 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jul 6 23:55:01.762825 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jul 6 23:55:01.762877 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jul 6 23:55:01.762927 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 6 23:55:01.762977 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jul 6 23:55:01.763030 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Jul 6 23:55:01.763081 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Jul 6 23:55:01.763131 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jul 6 23:55:01.763182 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Jul 6 23:55:01.763238 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Jul 6 23:55:01.763292 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Jul 6 23:55:01.763345 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Jul 6 23:55:01.763401 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jul 6 23:55:01.763453 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jul 6 23:55:01.763505 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jul 6 23:55:01.764284 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jul 6 23:55:01.764347 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jul 6 23:55:01.764403 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jul 6 23:55:01.764458 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jul 6 23:55:01.764519 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jul 6 23:55:01.765671 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jul 6 23:55:01.765733 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 6 23:55:01.765791 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jul 6 23:55:01.765844 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jul 6 23:55:01.765896 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jul 6 23:55:01.765948 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 6 23:55:01.766002 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jul 6 23:55:01.766057 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jul 6 23:55:01.766109 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 6 23:55:01.766162 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jul 6 23:55:01.766213 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jul 6 23:55:01.766264 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 6 23:55:01.766319 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jul 6 23:55:01.766371 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jul 6 23:55:01.766422 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jul 6 23:55:01.766476 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jul 6 23:55:01.766527 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jul 6 23:55:01.767606 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jul 6 23:55:01.767665 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jul 6 23:55:01.767722 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jul 6 23:55:01.767773 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jul 6 23:55:01.767830 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Jul 6 23:55:01.767884 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Jul 6 23:55:01.767937 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Jul 6 23:55:01.767989 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Jul 6 23:55:01.768042 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Jul 6 23:55:01.768094 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jul 6 23:55:01.768150 kernel: pci 0000:0b:00.0: supports D1 D2
Jul 6 23:55:01.768203 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 6 23:55:01.768254 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jul 6 23:55:01.768307 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jul 6 23:55:01.768359 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jul 6 23:55:01.768410 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jul 6 23:55:01.768462 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jul 6 23:55:01.768521 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jul 6 23:55:01.769623 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jul 6 23:55:01.769682 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jul 6 23:55:01.769741 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jul 6 23:55:01.769794 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jul 6 23:55:01.769860 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jul 6 23:55:01.769911 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jul 6 23:55:01.769963 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jul 6 23:55:01.770017 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jul 6 23:55:01.770068 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jul 6 23:55:01.770120 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jul 6 23:55:01.770170 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jul 6 23:55:01.770219 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jul 6 23:55:01.770271 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jul 6 23:55:01.770320 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jul 6 23:55:01.770369 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jul 6 23:55:01.770423 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jul 6 23:55:01.770474 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jul 6 23:55:01.770524 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jul 6 23:55:01.771694 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jul 6 23:55:01.771749 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jul 6 23:55:01.771800 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jul 6 23:55:01.771852 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jul 6 23:55:01.771903 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jul 6 23:55:01.771958 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jul 6 23:55:01.772008 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jul 6 23:55:01.772061 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jul 6 23:55:01.772111 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jul 6 23:55:01.772162 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jul 6 23:55:01.772212 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jul 6 23:55:01.772266 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jul 6 23:55:01.772364 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jul 6 23:55:01.772418 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jul 6 23:55:01.772498 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jul 6 23:55:01.774564 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jul 6 23:55:01.774620 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jul 6 23:55:01.774671 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jul 6 23:55:01.774726 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jul 6 23:55:01.774776 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jul 6 23:55:01.774832 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jul 6 23:55:01.774885 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jul 6 23:55:01.774937 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jul 6 23:55:01.774988 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jul 6 23:55:01.775041 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jul 6 23:55:01.775093 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jul 6 23:55:01.775143 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jul 6 23:55:01.775197 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jul 6 23:55:01.775252 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jul 6 23:55:01.775303 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jul 6 23:55:01.775356 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jul 6 23:55:01.775529 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jul 6 23:55:01.775758 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jul 6 23:55:01.775812 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jul 6 23:55:01.775901 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jul 6 23:55:01.775962 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jul 6 23:55:01.776017 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jul 6 23:55:01.776068 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jul 6 23:55:01.776121 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jul 6 23:55:01.776171 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jul 6 23:55:01.776223 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jul 6 23:55:01.776275 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jul 6 23:55:01.776326 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jul 6 23:55:01.776376 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jul 6 23:55:01.776433 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jul 6 23:55:01.776485 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jul 6 23:55:01.776535 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jul 6 23:55:01.779816 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jul 6 23:55:01.779874 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jul 6 23:55:01.779927 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jul 6 23:55:01.779980 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jul 6 23:55:01.780032 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jul 6 23:55:01.780087 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jul 6 23:55:01.780140 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jul 6 23:55:01.780191 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jul 6 23:55:01.780242 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jul 6 23:55:01.780251 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jul 6 23:55:01.780257 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jul 6 23:55:01.780263 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jul 6 23:55:01.780269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:55:01.780277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jul 6 23:55:01.780283 kernel: iommu: Default domain type: Translated
Jul 6 23:55:01.780289 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:55:01.780295 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:55:01.780301 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:55:01.780307 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jul 6 23:55:01.780313 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jul 6 23:55:01.780366 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jul 6 23:55:01.780418 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jul 6 23:55:01.780471 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:55:01.780480 kernel: vgaarb: loaded
Jul 6 23:55:01.780486 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jul 6 23:55:01.780492 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jul 6 23:55:01.780498 kernel: clocksource: Switched to clocksource tsc-early
Jul 6 23:55:01.780504 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:55:01.780510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:55:01.780516 kernel: pnp: PnP ACPI init
Jul 6 23:55:01.780585 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jul 6 23:55:01.780638 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jul 6 23:55:01.780685 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jul 6 23:55:01.780736 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jul 6 23:55:01.780787 kernel: pnp 00:06: [dma 2]
Jul 6 23:55:01.780840 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jul 6 23:55:01.780887 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jul 6 23:55:01.780936 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jul 6 23:55:01.780945 kernel: pnp: PnP ACPI: found 8 devices
Jul 6 23:55:01.780951 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:55:01.780957 kernel: NET: Registered PF_INET protocol family
Jul 6 23:55:01.780963 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:55:01.780969 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 6 23:55:01.780975 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:55:01.780981 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 6 23:55:01.780987 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 6 23:55:01.780995 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 6 23:55:01.781001 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.781007 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 6 23:55:01.781013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:55:01.781019 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:55:01.781071 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jul 6 23:55:01.781125 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 6 23:55:01.781182 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 6 23:55:01.781235 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 6 23:55:01.781289 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 6 23:55:01.781342 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jul 6 23:55:01.781396 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jul 6 23:55:01.781449 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jul 6 23:55:01.781505 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jul 6 23:55:01.781579 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jul 6 23:55:01.781652 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jul 6 23:55:01.781706 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jul 6 23:55:01.781758 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jul 6 23:55:01.781812 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jul 6 23:55:01.781868 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Jul 6 23:55:01.781920 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Jul 6 23:55:01.781974 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Jul 6 23:55:01.782026 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Jul 6 23:55:01.782078 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Jul 6 23:55:01.782131 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Jul 6 23:55:01.782185 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Jul 6 23:55:01.782237 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Jul 6 23:55:01.782289 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Jul 6 23:55:01.782340 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Jul 6 23:55:01.782445 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Jul 6 23:55:01.782500 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.784490 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.784575 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.784630 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.784684 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.784735 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.784788 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.784840 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.784893 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.784950 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785003 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785054 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785107 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785158 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785211 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785263 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785316 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785370 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785423 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785475 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785534 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785607 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785661 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785711 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785764 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785818 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785871 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.785923 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.785976 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786028 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.786082 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786133 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.786186 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786237 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.786293 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786345 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.786398 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786450 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.786502 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.786941 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787004 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787056 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787112 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787163 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787215 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787266 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787317 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787369 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787419 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787471 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787522 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787602 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787656 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787708 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787760 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787810 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787862 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.787913 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.787964 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788014 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788069 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788120 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788171 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788222 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788273 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788324 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788377 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788428 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788480 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788530 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788666 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788718 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788769 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788819 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788869 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.788919 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.788970 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.789021 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.789072 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.789127 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.789177 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.789227 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.789280 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Jul 6 23:55:01.789331 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Jul 6 23:55:01.789384 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 6 23:55:01.789435 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Jul 6 23:55:01.789486 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jul 6 23:55:01.789536 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jul 6 23:55:01.789599 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 6 23:55:01.789662 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Jul 6 23:55:01.789715 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jul 6 23:55:01.789767 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jul 6 23:55:01.789817 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jul 6 23:55:01.789869 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Jul 6 23:55:01.789922 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jul 6 23:55:01.789974 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jul 6 23:55:01.790025 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jul 6 23:55:01.790079 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 6 23:55:01.790134 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jul 6 23:55:01.790186 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jul 6 23:55:01.790237 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jul 6 23:55:01.790288 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 6 23:55:01.790340 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jul 6 23:55:01.790392 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jul 6 23:55:01.790444 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 6 23:55:01.790496 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jul 6 23:55:01.790558 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jul 6 23:55:01.790611 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 6 23:55:01.790667 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jul 6 23:55:01.790719 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jul 6 23:55:01.790770 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jul 6 23:55:01.790822 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jul 6 23:55:01.790876 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jul 6 23:55:01.790928 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jul 6 23:55:01.790980 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jul 6 23:55:01.791031 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jul 6 23:55:01.791083 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jul 6 23:55:01.791140 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Jul 6 23:55:01.791194 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jul 6 23:55:01.791245 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jul 6 23:55:01.791297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jul 6 23:55:01.791352 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Jul 6 23:55:01.791405 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jul 6 23:55:01.791457 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jul 6 23:55:01.791512 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jul 6 23:55:01.791576 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jul 6 23:55:01.791632 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jul 6 23:55:01.791684 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jul 6 23:55:01.791736 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jul 6 23:55:01.791787 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jul 6 23:55:01.791838 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jul 6 23:55:01.791892 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jul 6 23:55:01.791943 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jul 6 23:55:01.791995 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jul 6 23:55:01.792046 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jul 6 23:55:01.792098 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jul 6 23:55:01.792150 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jul 6 23:55:01.792201 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jul 6 23:55:01.792252 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jul 6 23:55:01.792304 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jul 6 23:55:01.792358 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jul 6 23:55:01.792457 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jul 6 23:55:01.792511 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jul 6 23:55:01.792574 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jul 6 23:55:01.792630 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jul 6 23:55:01.792683 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jul 6 23:55:01.792735 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jul
6 23:55:01.792785 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 6 23:55:01.792837 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:55:01.792891 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 6 23:55:01.792947 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 6 23:55:01.792999 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 6 23:55:01.793050 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:55:01.793102 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 6 23:55:01.793154 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 6 23:55:01.793205 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 6 23:55:01.793256 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:55:01.793308 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 6 23:55:01.793358 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 6 23:55:01.793412 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:55:01.793463 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 6 23:55:01.793514 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 6 23:55:01.793572 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:55:01.793624 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 6 23:55:01.793675 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 6 23:55:01.793726 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:55:01.793779 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 6 23:55:01.793831 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 6 23:55:01.793883 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:55:01.793939 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 6 
23:55:01.793991 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 6 23:55:01.794043 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:55:01.794097 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 6 23:55:01.794149 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 6 23:55:01.794201 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 6 23:55:01.794252 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:55:01.794306 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 6 23:55:01.794358 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 6 23:55:01.794412 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 6 23:55:01.794463 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:55:01.794520 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 6 23:55:01.794631 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 6 23:55:01.794684 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:55:01.794737 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 6 23:55:01.794788 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 6 23:55:01.794841 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:55:01.794894 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 6 23:55:01.794946 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 6 23:55:01.795000 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:55:01.795054 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 6 23:55:01.795106 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 6 23:55:01.795158 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:55:01.795211 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 6 
23:55:01.795263 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 6 23:55:01.795314 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:55:01.795366 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 6 23:55:01.795419 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 6 23:55:01.795475 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:55:01.795532 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:55:01.795588 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:55:01.795635 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.795681 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:55:01.795726 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:55:01.795777 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 6 23:55:01.795825 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 6 23:55:01.795877 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 6 23:55:01.795924 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 6 23:55:01.795971 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 6 23:55:01.796019 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:01.796066 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 6 23:55:01.796113 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 6 23:55:01.796166 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 6 23:55:01.796231 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 6 23:55:01.796282 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 6 23:55:01.796335 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 6 23:55:01.796382 kernel: pci_bus 0000:04: resource 1 
[mem 0xfd100000-0xfd1fffff] Jul 6 23:55:01.796429 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 6 23:55:01.796480 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 6 23:55:01.796528 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 6 23:55:01.796744 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 6 23:55:01.796797 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 6 23:55:01.796845 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 6 23:55:01.796897 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 6 23:55:01.796944 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 6 23:55:01.796997 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 6 23:55:01.797048 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 6 23:55:01.797100 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 6 23:55:01.797147 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 6 23:55:01.797202 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 6 23:55:01.797250 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 6 23:55:01.797312 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 6 23:55:01.797363 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 6 23:55:01.797411 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 6 23:55:01.797463 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 6 23:55:01.797511 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 6 23:55:01.797608 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 6 23:55:01.797661 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 6 23:55:01.797710 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 6 
23:55:01.797765 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 6 23:55:01.797817 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 6 23:55:01.797865 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 6 23:55:01.797917 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 6 23:55:01.797964 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 6 23:55:01.798017 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 6 23:55:01.798067 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 6 23:55:01.798119 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 6 23:55:01.798167 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 6 23:55:01.798223 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 6 23:55:01.798272 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 6 23:55:01.798325 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 6 23:55:01.798376 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 6 23:55:01.798424 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 6 23:55:01.798476 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 6 23:55:01.798531 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 6 23:55:01.798597 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 6 23:55:01.798651 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 6 23:55:01.798699 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 6 23:55:01.798749 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 6 23:55:01.798800 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 6 23:55:01.798848 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 6 23:55:01.798901 kernel: 
pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 6 23:55:01.798950 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 6 23:55:01.799003 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 6 23:55:01.799054 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 6 23:55:01.799110 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 6 23:55:01.799158 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 6 23:55:01.799211 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 6 23:55:01.799260 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 6 23:55:01.799317 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 6 23:55:01.799370 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 6 23:55:01.799421 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 6 23:55:01.799474 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 6 23:55:01.799523 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 6 23:55:01.799591 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 6 23:55:01.799647 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 6 23:55:01.799698 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 6 23:55:01.799751 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 6 23:55:01.799799 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 6 23:55:01.799852 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 6 23:55:01.799901 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 6 23:55:01.799952 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 6 23:55:01.800001 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 6 23:55:01.800055 kernel: pci_bus 
0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 6 23:55:01.800104 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 6 23:55:01.800156 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 6 23:55:01.800204 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 6 23:55:01.800261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 6 23:55:01.800272 kernel: PCI: CLS 32 bytes, default 64 Jul 6 23:55:01.800280 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 6 23:55:01.800287 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 6 23:55:01.800293 kernel: clocksource: Switched to clocksource tsc Jul 6 23:55:01.800300 kernel: Initialise system trusted keyrings Jul 6 23:55:01.800307 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 6 23:55:01.800314 kernel: Key type asymmetric registered Jul 6 23:55:01.800320 kernel: Asymmetric key parser 'x509' registered Jul 6 23:55:01.800326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:55:01.800333 kernel: io scheduler mq-deadline registered Jul 6 23:55:01.800341 kernel: io scheduler kyber registered Jul 6 23:55:01.800348 kernel: io scheduler bfq registered Jul 6 23:55:01.800403 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 6 23:55:01.800457 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800513 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 6 23:55:01.800613 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800670 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 6 23:55:01.800723 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800779 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 6 23:55:01.800832 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800885 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 6 23:55:01.800938 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.800991 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 6 23:55:01.801043 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801099 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 6 23:55:01.801151 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801204 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 6 23:55:01.801256 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801309 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 6 23:55:01.801365 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801419 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 6 23:55:01.801472 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801536 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 6 23:55:01.801668 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801723 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 6 23:55:01.801776 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801833 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 6 23:55:01.801886 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.801939 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 6 23:55:01.801991 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802044 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 6 23:55:01.802100 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802153 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 6 23:55:01.802206 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.802260 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 6 23:55:01.802314 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803611 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 6 23:55:01.803686 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803746 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 6 23:55:01.803801 kernel: pcieport 0000:00:17.2: 
pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803857 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 6 23:55:01.803912 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.803967 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 6 23:55:01.804023 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804078 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 6 23:55:01.804132 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804185 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 6 23:55:01.804239 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804293 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 6 23:55:01.804350 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804404 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 6 23:55:01.804457 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804518 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 6 23:55:01.804594 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804651 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 6 23:55:01.804707 kernel: pcieport 
0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804762 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 6 23:55:01.804816 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804870 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 6 23:55:01.804924 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.804981 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 6 23:55:01.805034 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805089 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 6 23:55:01.805141 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805195 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 6 23:55:01.805247 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 6 23:55:01.805259 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:55:01.805266 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:55:01.805273 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:55:01.805279 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 6 23:55:01.805285 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:55:01.805291 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:55:01.805348 kernel: rtc_cmos 00:01: registered as rtc0 Jul 6 23:55:01.805401 kernel: 
rtc_cmos 00:01: setting system clock to 2025-07-06T23:55:01 UTC (1751846101) Jul 6 23:55:01.805449 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 6 23:55:01.805458 kernel: intel_pstate: CPU model not supported Jul 6 23:55:01.805465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:55:01.805472 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:55:01.805479 kernel: Segment Routing with IPv6 Jul 6 23:55:01.805485 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:55:01.805491 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:55:01.805502 kernel: Key type dns_resolver registered Jul 6 23:55:01.805511 kernel: IPI shorthand broadcast: enabled Jul 6 23:55:01.805517 kernel: sched_clock: Marking stable (928335659, 228834771)->(1222504812, -65334382) Jul 6 23:55:01.805524 kernel: registered taskstats version 1 Jul 6 23:55:01.805530 kernel: Loading compiled-in X.509 certificates Jul 6 23:55:01.805537 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:55:01.805543 kernel: Key type .fscrypt registered Jul 6 23:55:01.806479 kernel: Key type fscrypt-provisioning registered Jul 6 23:55:01.806490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:55:01.806497 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:55:01.806507 kernel: ima: No architecture policies found Jul 6 23:55:01.806515 kernel: clk: Disabling unused clocks Jul 6 23:55:01.806522 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:55:01.806529 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:55:01.806535 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:55:01.806542 kernel: Run /init as init process Jul 6 23:55:01.806555 kernel: with arguments: Jul 6 23:55:01.806562 kernel: /init Jul 6 23:55:01.806569 kernel: with environment: Jul 6 23:55:01.806576 kernel: HOME=/ Jul 6 23:55:01.806583 kernel: TERM=linux Jul 6 23:55:01.806589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:55:01.806597 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:01.806607 systemd[1]: Detected virtualization vmware. Jul 6 23:55:01.806614 systemd[1]: Detected architecture x86-64. Jul 6 23:55:01.806620 systemd[1]: Running in initrd. Jul 6 23:55:01.806628 systemd[1]: No hostname configured, using default hostname. Jul 6 23:55:01.806634 systemd[1]: Hostname set to . Jul 6 23:55:01.806641 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:01.806647 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:55:01.806654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:01.806660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 6 23:55:01.806669 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:55:01.806676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:01.806684 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:55:01.806691 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:55:01.806699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:55:01.806706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:55:01.806712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:01.806719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:01.806726 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:01.806734 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:01.806740 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:01.806747 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:01.806753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:01.806760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:01.806766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:01.806773 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:55:01.806779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:01.806786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:01.806795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 6 23:55:01.806801 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:01.806808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:55:01.806815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:01.806822 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:55:01.806828 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:55:01.806835 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:01.806841 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:01.806849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:01.806872 systemd-journald[215]: Collecting audit messages is disabled. Jul 6 23:55:01.806889 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:01.806896 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:01.806905 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:55:01.806912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:55:01.806918 kernel: Bridge firewalling registered Jul 6 23:55:01.806925 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:01.806931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:01.806940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:01.806946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:01.806953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:01.806960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:55:01.806966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:55:01.806974 systemd-journald[215]: Journal started
Jul 6 23:55:01.806991 systemd-journald[215]: Runtime Journal (/run/log/journal/2119a5fe2e4c436b8627108f214aa588) is 4.8M, max 38.6M, 33.8M free.
Jul 6 23:55:01.756079 systemd-modules-load[216]: Inserted module 'overlay'
Jul 6 23:55:01.777230 systemd-modules-load[216]: Inserted module 'br_netfilter'
Jul 6 23:55:01.813217 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:55:01.819686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:55:01.819924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:01.820120 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:01.820298 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:01.821667 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:55:01.826488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:01.829717 dracut-cmdline[246]: dracut-dracut-053
Jul 6 23:55:01.830745 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:01.831862 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:01.849828 systemd-resolved[252]: Positive Trust Anchors:
Jul 6 23:55:01.849839 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:55:01.849862 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:55:01.851780 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jul 6 23:55:01.852460 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:01.852625 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:01.883574 kernel: SCSI subsystem initialized
Jul 6 23:55:01.891571 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:55:01.898570 kernel: iscsi: registered transport (tcp)
Jul 6 23:55:01.914096 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:55:01.914145 kernel: QLogic iSCSI HBA Driver
Jul 6 23:55:01.935080 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:55:01.946731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:55:01.961783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:55:01.961829 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:55:01.962855 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:55:01.994578 kernel: raid6: avx2x4 gen() 51848 MB/s
Jul 6 23:55:02.011580 kernel: raid6: avx2x2 gen() 46149 MB/s
Jul 6 23:55:02.028808 kernel: raid6: avx2x1 gen() 43207 MB/s
Jul 6 23:55:02.028865 kernel: raid6: using algorithm avx2x4 gen() 51848 MB/s
Jul 6 23:55:02.046852 kernel: raid6: .... xor() 18012 MB/s, rmw enabled
Jul 6 23:55:02.046916 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:55:02.060574 kernel: xor: automatically using best checksumming function avx
Jul 6 23:55:02.163577 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:55:02.169517 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:55:02.174692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:02.182285 systemd-udevd[432]: Using default interface naming scheme 'v255'.
Jul 6 23:55:02.184880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:02.194753 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:55:02.202446 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation
Jul 6 23:55:02.221186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:02.226708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:55:02.303852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:02.309644 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:55:02.321362 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:02.321708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:02.322204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:02.322481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:55:02.325693 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:55:02.331775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:02.371565 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jul 6 23:55:02.373317 kernel: vmw_pvscsi: using 64bit dma
Jul 6 23:55:02.373338 kernel: vmw_pvscsi: max_id: 16
Jul 6 23:55:02.373346 kernel: vmw_pvscsi: setting ring_pages to 8
Jul 6 23:55:02.375558 kernel: vmw_pvscsi: enabling reqCallThreshold
Jul 6 23:55:02.375576 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jul 6 23:55:02.375616 kernel: vmw_pvscsi: using MSI-X
Jul 6 23:55:02.378173 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jul 6 23:55:02.378263 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jul 6 23:55:02.379596 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jul 6 23:55:02.395574 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Jul 6 23:55:02.399565 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jul 6 23:55:02.403560 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jul 6 23:55:02.407955 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:55:02.409793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:02.409867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:02.410843 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:02.410969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:02.411047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:02.411187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:02.415578 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jul 6 23:55:02.414730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:02.424624 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:55:02.426464 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:55:02.428571 kernel: libata version 3.00 loaded.
Jul 6 23:55:02.429576 kernel: ata_piix 0000:00:07.1: version 2.13
Jul 6 23:55:02.431578 kernel: scsi host1: ata_piix
Jul 6 23:55:02.431677 kernel: scsi host2: ata_piix
Jul 6 23:55:02.431747 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Jul 6 23:55:02.431756 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Jul 6 23:55:02.439318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:02.443708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:02.454204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:02.598583 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jul 6 23:55:02.604602 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jul 6 23:55:02.616904 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jul 6 23:55:02.617030 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:55:02.617097 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jul 6 23:55:02.617160 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jul 6 23:55:02.617589 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jul 6 23:55:02.623880 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:02.623909 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:55:02.627569 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jul 6 23:55:02.627665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:55:02.641581 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:55:02.669891 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (485)
Jul 6 23:55:02.670324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jul 6 23:55:02.674040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jul 6 23:55:02.674620 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (486)
Jul 6 23:55:02.676779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jul 6 23:55:02.679712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jul 6 23:55:02.679846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jul 6 23:55:02.682617 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:55:02.709587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:02.717572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:03.723578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:55:03.724171 disk-uuid[587]: The operation has completed successfully.
Jul 6 23:55:03.767137 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:55:03.767213 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:55:03.770656 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:55:03.774329 sh[605]: Success
Jul 6 23:55:03.783599 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jul 6 23:55:03.842646 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:55:03.843854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:55:03.844204 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:55:03.860943 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:55:03.860985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:03.860993 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:55:03.862061 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:55:03.862882 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:55:03.872567 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:55:03.874364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:55:03.884700 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jul 6 23:55:03.886111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:55:03.911924 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:03.911959 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:03.911968 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:03.922567 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:03.929802 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:55:03.930621 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:03.936786 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:55:03.940677 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:55:03.965794 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 6 23:55:03.971736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:55:04.027168 ignition[664]: Ignition 2.19.0
Jul 6 23:55:04.027176 ignition[664]: Stage: fetch-offline
Jul 6 23:55:04.027196 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.027201 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.027264 ignition[664]: parsed url from cmdline: ""
Jul 6 23:55:04.027266 ignition[664]: no config URL provided
Jul 6 23:55:04.027269 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.027273 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:04.029339 ignition[664]: config successfully fetched
Jul 6 23:55:04.029360 ignition[664]: parsing config with SHA512: 9ec580518a5d41887662762ab23102ff355f39c474e2d172a5e2944f03f6cba03ff8f5b44f8d2b18c448f219b926140863af099d28d8edc898ad2680d556a10c
Jul 6 23:55:04.034126 unknown[664]: fetched base config from "system"
Jul 6 23:55:04.034400 ignition[664]: fetch-offline: fetch-offline passed
Jul 6 23:55:04.034133 unknown[664]: fetched user config from "vmware"
Jul 6 23:55:04.034461 ignition[664]: Ignition finished successfully
Jul 6 23:55:04.034234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:04.040573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:04.041581 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:04.052377 systemd-networkd[797]: lo: Link UP
Jul 6 23:55:04.052384 systemd-networkd[797]: lo: Gained carrier
Jul 6 23:55:04.053304 systemd-networkd[797]: Enumeration completed
Jul 6 23:55:04.053462 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:04.053689 systemd-networkd[797]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jul 6 23:55:04.053925 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:04.054012 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:55:04.057460 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jul 6 23:55:04.057648 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jul 6 23:55:04.057218 systemd-networkd[797]: ens192: Link UP
Jul 6 23:55:04.057220 systemd-networkd[797]: ens192: Gained carrier
Jul 6 23:55:04.063718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:55:04.071877 ignition[800]: Ignition 2.19.0
Jul 6 23:55:04.071884 ignition[800]: Stage: kargs
Jul 6 23:55:04.071993 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.071999 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.072608 ignition[800]: kargs: kargs passed
Jul 6 23:55:04.072639 ignition[800]: Ignition finished successfully
Jul 6 23:55:04.073759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:04.078696 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:55:04.085970 ignition[807]: Ignition 2.19.0
Jul 6 23:55:04.085976 ignition[807]: Stage: disks
Jul 6 23:55:04.086099 ignition[807]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.086104 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.086744 ignition[807]: disks: disks passed
Jul 6 23:55:04.086778 ignition[807]: Ignition finished successfully
Jul 6 23:55:04.087450 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:55:04.087966 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:04.088202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:55:04.088460 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:04.088694 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:04.088919 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:04.093753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:55:04.101806 systemd-resolved[252]: Detected conflict on linux IN A 139.178.70.109
Jul 6 23:55:04.102126 systemd-resolved[252]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 6 23:55:04.106638 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 6 23:55:04.108563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:55:04.112621 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:55:04.174562 kernel: EXT4-fs (sda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:55:04.174719 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:55:04.175210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:04.185645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:04.189622 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:55:04.189889 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:55:04.189918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:55:04.189933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:04.193385 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:55:04.194439 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:55:04.196574 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (824)
Jul 6 23:55:04.200328 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.200349 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:04.200358 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:04.207609 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:04.207266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:04.232856 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:55:04.236957 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:55:04.239496 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:55:04.241964 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:55:04.297338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:04.300644 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:55:04.302225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:04.306591 kernel: BTRFS info (device sda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.318565 ignition[937]: INFO : Ignition 2.19.0
Jul 6 23:55:04.318565 ignition[937]: INFO : Stage: mount
Jul 6 23:55:04.318565 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.318565 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.320571 ignition[937]: INFO : mount: mount passed
Jul 6 23:55:04.320571 ignition[937]: INFO : Ignition finished successfully
Jul 6 23:55:04.322739 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:55:04.327661 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:55:04.330831 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:04.859183 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:04.863679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:04.870566 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Jul 6 23:55:04.873153 kernel: BTRFS info (device sda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:04.873182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:04.873191 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:55:04.909582 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:55:04.913018 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:04.930635 ignition[965]: INFO : Ignition 2.19.0
Jul 6 23:55:04.930635 ignition[965]: INFO : Stage: files
Jul 6 23:55:04.930635 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:04.930635 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:04.930635 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:55:04.931490 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:55:04.931639 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:55:04.934695 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:55:04.935066 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:55:04.935508 unknown[965]: wrote ssh authorized keys file for user: core
Jul 6 23:55:04.935862 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:55:04.938584 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:04.938808 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:55:04.976669 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:55:05.073109 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:05.073626 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:05.074793 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:55:05.506686 systemd-networkd[797]: ens192: Gained IPv6LL
Jul 6 23:55:05.771767 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 6 23:55:06.044612 ignition[965]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 6 23:55:06.047171 ignition[965]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:06.768448 ignition[965]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:06.771790 ignition[965]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:06.772004 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:06.772718 ignition[965]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:06.772718 ignition[965]: INFO : files: files passed
Jul 6 23:55:06.772718 ignition[965]: INFO : Ignition finished successfully
Jul 6 23:55:06.772776 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:55:06.776651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:55:06.778766 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:55:06.794652 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.794652 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.795233 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:06.796546 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:06.796983 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:55:06.800714 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:55:06.801359 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:55:06.801435 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:55:06.831136 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:55:06.831196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:55:06.831627 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:06.831781 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:55:06.831986 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:55:06.832416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:55:06.841807 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:06.847706 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:55:06.853159 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:06.853335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:06.853485 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:55:06.853672 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:55:06.853745 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:06.854150 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:55:06.854373 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:55:06.854577 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:55:06.854785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:06.854970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:06.855177 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:55:06.855378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:06.855634 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:55:06.855787 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:55:06.855989 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:55:06.856145 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:55:06.856211 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:06.856467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:06.856705 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:06.856889 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:55:06.856936 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:06.857098 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:55:06.857161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:06.857403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:55:06.857471 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:06.857728 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:55:06.857857 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:55:06.861590 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:06.861794 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:55:06.862003 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:55:06.862188 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:55:06.862246 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:55:06.862462 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:55:06.862511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:55:06.862762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:55:06.862828 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:06.863073 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:55:06.863130 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:55:06.867720 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:55:06.867834 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:55:06.867926 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:06.869146 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:06.869244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:55:06.869333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:06.869514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:55:06.869603 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:06.872747 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:55:06.872807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:55:06.877561 ignition[1019]: INFO : Ignition 2.19.0
Jul 6 23:55:06.877561 ignition[1019]: INFO : Stage: umount
Jul 6 23:55:06.877561 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:06.877561 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 6 23:55:06.878776 ignition[1019]: INFO : umount: umount passed
Jul 6 23:55:06.878776 ignition[1019]: INFO : Ignition finished successfully
Jul 6 23:55:06.879792 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:55:06.879871 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:55:06.880135 systemd[1]: Stopped target network.target - Network.
Jul 6 23:55:06.880241 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:55:06.880269 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:55:06.880420 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:55:06.880445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:06.880592 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:55:06.880614 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:55:06.880764 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:55:06.880786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:55:06.881046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:55:06.881311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:06.885545 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:55:06.885632 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:55:06.886650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:55:06.886673 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:06.889659 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:55:06.890464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:55:06.890515 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:06.890657 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 6 23:55:06.890680 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 6 23:55:06.891234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:06.892197 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:55:06.892251 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:55:06.894987 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:55:06.895040 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:06.895256 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:55:06.895281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:06.895391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:55:06.895414 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:06.899316 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:55:06.900943 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:55:06.901001 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:55:06.903809 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:55:06.904003 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:06.904530 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:55:06.904596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:06.904755 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:55:06.904787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:06.904945 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:55:06.904970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:06.905243 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:55:06.905266 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:06.905591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 6 23:55:06.905617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:06.908686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:55:06.908802 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:55:06.908834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:06.908970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:06.908993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:06.912318 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:55:06.912374 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:55:07.054742 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:55:07.054830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:55:07.055130 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:55:07.055254 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:55:07.055281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:07.059645 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:55:07.093961 systemd[1]: Switching root. Jul 6 23:55:07.120609 systemd-journald[215]: Journal stopped Jul 6 23:55:08.591106 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). 
Jul 6 23:55:08.591137 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:55:08.591145 kernel: SELinux: policy capability open_perms=1 Jul 6 23:55:08.591151 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:55:08.591156 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:55:08.591162 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:55:08.591170 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:55:08.591176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:55:08.591181 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:55:08.591187 systemd[1]: Successfully loaded SELinux policy in 65.322ms. Jul 6 23:55:08.591194 kernel: audit: type=1403 audit(1751846107.961:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:55:08.591217 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.916ms. Jul 6 23:55:08.591226 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:08.591235 systemd[1]: Detected virtualization vmware. Jul 6 23:55:08.591242 systemd[1]: Detected architecture x86-64. Jul 6 23:55:08.591248 systemd[1]: Detected first boot. Jul 6 23:55:08.591256 systemd[1]: Initializing machine ID from random generator. Jul 6 23:55:08.591264 zram_generator::config[1080]: No configuration found. Jul 6 23:55:08.591272 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:55:08.591279 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jul 6 23:55:08.591286 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 6 23:55:08.591293 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:55:08.591299 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:55:08.591306 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:55:08.591314 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:55:08.591321 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:55:08.591327 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:55:08.591334 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:55:08.591341 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:55:08.591347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:55:08.591354 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:55:08.591361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:08.591368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:08.591375 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:55:08.591381 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:55:08.591388 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:55:08.591395 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:08.591401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jul 6 23:55:08.591408 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:08.591417 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:55:08.591424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:08.591432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:08.591439 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:08.591446 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:08.591453 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:55:08.591460 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:55:08.591466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:08.591474 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 6 23:55:08.591482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:08.591489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:08.591500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:08.591507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:55:08.591516 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:55:08.591523 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:55:08.591530 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:55:08.591537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:08.591544 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:55:08.591557 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jul 6 23:55:08.591564 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:55:08.591571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:55:08.591580 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 6 23:55:08.591588 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:08.591595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:55:08.591602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:08.591609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:08.591615 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:08.591622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:55:08.591629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:08.591636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:55:08.591645 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 6 23:55:08.591651 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 6 23:55:08.591658 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:08.591665 kernel: fuse: init (API version 7.39) Jul 6 23:55:08.591671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:08.591678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:55:08.591685 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 6 23:55:08.591692 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:08.591700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:08.591708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:55:08.591715 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:55:08.591721 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:55:08.591729 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:55:08.591736 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:55:08.591742 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:55:08.591749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:08.591758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:55:08.591765 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:55:08.591772 kernel: ACPI: bus type drm_connector registered Jul 6 23:55:08.591778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:55:08.592739 systemd-journald[1183]: Collecting audit messages is disabled. Jul 6 23:55:08.592765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:08.592773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:08.592780 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:08.592788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:08.592794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:08.592802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:08.592808 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 6 23:55:08.592815 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:55:08.592824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:08.592832 systemd-journald[1183]: Journal started Jul 6 23:55:08.592846 systemd-journald[1183]: Runtime Journal (/run/log/journal/6da6f81f7d77407aa297063413d90f00) is 4.8M, max 38.6M, 33.8M free. Jul 6 23:55:08.593223 jq[1158]: true Jul 6 23:55:08.594664 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:08.594871 jq[1207]: true Jul 6 23:55:08.597442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:55:08.599557 kernel: loop: module loaded Jul 6 23:55:08.597925 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:55:08.603870 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:08.603965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:08.611584 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:55:08.617647 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:55:08.620609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:55:08.620777 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:55:08.627650 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:55:08.630664 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:55:08.631412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:08.638700 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 6 23:55:08.638860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:08.641789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:55:08.650000 systemd-journald[1183]: Time spent on flushing to /var/log/journal/6da6f81f7d77407aa297063413d90f00 is 86.210ms for 1819 entries. Jul 6 23:55:08.650000 systemd-journald[1183]: System Journal (/var/log/journal/6da6f81f7d77407aa297063413d90f00) is 8.0M, max 584.8M, 576.8M free. Jul 6 23:55:08.760432 systemd-journald[1183]: Received client request to flush runtime journal. Jul 6 23:55:08.701945 ignition[1213]: Ignition 2.19.0 Jul 6 23:55:08.651694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:08.702151 ignition[1213]: deleting config from guestinfo properties Jul 6 23:55:08.656846 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:55:08.737304 ignition[1213]: Successfully deleted config Jul 6 23:55:08.658641 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:55:08.677855 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:55:08.678047 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:55:08.731467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:08.736858 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:55:08.740841 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 6 23:55:08.741175 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 6 23:55:08.741184 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jul 6 23:55:08.742846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:55:08.747843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:08.752736 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:55:08.754680 udevadm[1257]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:55:08.762689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:55:08.783341 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:55:08.788682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:08.797005 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 6 23:55:08.797202 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 6 23:55:08.800859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:09.274168 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:55:09.281750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:09.296867 systemd-udevd[1279]: Using default interface naming scheme 'v255'. Jul 6 23:55:09.365205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:09.371932 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:09.383692 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:55:09.399740 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 6 23:55:09.414486 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:55:09.462563 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
Jul 6 23:55:09.468598 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 6 23:55:09.476589 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:55:09.477159 systemd-networkd[1286]: lo: Link UP Jul 6 23:55:09.477164 systemd-networkd[1286]: lo: Gained carrier Jul 6 23:55:09.478311 systemd-networkd[1286]: Enumeration completed Jul 6 23:55:09.478378 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:09.479356 systemd-networkd[1286]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 6 23:55:09.481623 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 6 23:55:09.482711 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 6 23:55:09.482392 systemd-networkd[1286]: ens192: Link UP Jul 6 23:55:09.482497 systemd-networkd[1286]: ens192: Gained carrier Jul 6 23:55:09.483157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:55:09.490877 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 6 23:55:09.491407 kernel: Guest personality initialized and is active Jul 6 23:55:09.492752 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 6 23:55:09.492778 kernel: Initialized host personality Jul 6 23:55:09.496564 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1285) Jul 6 23:55:09.537767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 6 23:55:09.559642 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jul 6 23:55:09.578567 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:55:09.579954 (udev-worker)[1282]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 6 23:55:09.590114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 6 23:55:09.597800 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:55:09.603746 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:55:09.612240 lvm[1321]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:09.637728 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:55:09.638176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:09.642763 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:55:09.649161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:09.651499 lvm[1326]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:09.668472 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:55:09.668708 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:09.668835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:55:09.668851 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:09.668954 systemd[1]: Reached target machines.target - Containers. Jul 6 23:55:09.670081 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:55:09.673651 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:55:09.674681 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:55:09.674852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 6 23:55:09.677835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:55:09.680083 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:55:09.683754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:55:09.684380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:55:09.723423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:55:09.733572 kernel: loop0: detected capacity change from 0 to 142488 Jul 6 23:55:09.755572 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:55:09.755998 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 6 23:55:09.838566 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:55:09.863584 kernel: loop1: detected capacity change from 0 to 140768 Jul 6 23:55:09.915568 kernel: loop2: detected capacity change from 0 to 221472 Jul 6 23:55:09.967590 kernel: loop3: detected capacity change from 0 to 2976 Jul 6 23:55:10.008593 kernel: loop4: detected capacity change from 0 to 142488 Jul 6 23:55:10.032628 kernel: loop5: detected capacity change from 0 to 140768 Jul 6 23:55:10.056569 kernel: loop6: detected capacity change from 0 to 221472 Jul 6 23:55:10.125580 kernel: loop7: detected capacity change from 0 to 2976 Jul 6 23:55:10.234492 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 6 23:55:10.234783 (sd-merge)[1350]: Merged extensions into '/usr'. Jul 6 23:55:10.241996 systemd[1]: Reloading requested from client PID 1336 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:55:10.242072 systemd[1]: Reloading... Jul 6 23:55:10.276563 zram_generator::config[1376]: No configuration found. 
Jul 6 23:55:10.347692 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:55:10.362661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:10.400021 systemd[1]: Reloading finished in 157 ms. Jul 6 23:55:10.409998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:55:10.417380 ldconfig[1332]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:55:10.418724 systemd[1]: Starting ensure-sysext.service... Jul 6 23:55:10.421624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:10.421968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:55:10.424461 systemd[1]: Reloading requested from client PID 1439 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:55:10.424518 systemd[1]: Reloading... Jul 6 23:55:10.433893 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:55:10.434089 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:55:10.434607 systemd-tmpfiles[1441]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:55:10.434782 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jul 6 23:55:10.434821 systemd-tmpfiles[1441]: ACLs are not supported, ignoring. Jul 6 23:55:10.436929 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 6 23:55:10.436935 systemd-tmpfiles[1441]: Skipping /boot Jul 6 23:55:10.442499 systemd-tmpfiles[1441]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:10.442505 systemd-tmpfiles[1441]: Skipping /boot Jul 6 23:55:10.465107 zram_generator::config[1468]: No configuration found. Jul 6 23:55:10.524469 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:55:10.538942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:10.575372 systemd[1]: Reloading finished in 150 ms. Jul 6 23:55:10.588908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:10.601455 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:10.605666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:55:10.608820 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:55:10.611703 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:10.612653 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:55:10.617305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:10.623869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:10.624971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:10.635759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 6 23:55:10.635997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:10.636070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:10.640046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:55:10.647939 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:55:10.648656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:10.648742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:10.649168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:10.649246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:10.649875 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:10.650062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:10.652049 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:55:10.654123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:10.657146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:10.667001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:10.670512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:10.670655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:10.670725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:10.671923 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:55:10.672316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:10.672396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:10.677783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:10.677863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:10.678498 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:10.682106 augenrules[1577]: No rules
Jul 6 23:55:10.680064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:10.687946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:55:10.688116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:10.688190 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:10.688254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:10.688846 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:55:10.690014 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:10.690095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:10.690407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:10.690482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:10.690804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:55:10.690877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:55:10.695172 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:55:10.695564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:10.700822 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:55:10.712682 systemd-resolved[1540]: Positive Trust Anchors:
Jul 6 23:55:10.712691 systemd-resolved[1540]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:55:10.712714 systemd-resolved[1540]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:55:10.732784 systemd-resolved[1540]: Defaulting to hostname 'linux'.
Jul 6 23:55:10.734034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:10.734219 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:10.734328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:10.736331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:55:10.736589 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:55:10.755243 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:55:10.755696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:55:10.755725 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:10.756219 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:55:10.756399 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:55:10.756658 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:55:10.756850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:55:10.757058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:55:10.757228 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:55:10.757280 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:55:10.757413 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:55:10.758042 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:55:10.759141 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:55:10.760212 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:55:10.762093 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:55:10.762205 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:55:10.762297 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:10.762458 systemd[1]: System is tainted: cgroupsv1
Jul 6 23:55:10.762478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:10.762490 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:10.764028 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:55:10.765679 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:55:10.768226 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:55:10.769617 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:55:10.771018 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:55:10.772851 jq[1608]: false
Jul 6 23:55:10.773284 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:55:10.777209 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:55:10.778865 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:55:10.787703 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:55:10.791987 dbus-daemon[1606]: [system] SELinux support is enabled
Jul 6 23:55:10.791686 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:55:10.792278 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found loop4
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found loop5
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found loop6
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found loop7
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda1
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda2
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda3
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found usr
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda4
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda6
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda7
Jul 6 23:55:10.803637 extend-filesystems[1609]: Found sda9
Jul 6 23:55:10.803637 extend-filesystems[1609]: Checking size of /dev/sda9
Jul 6 23:55:10.800158 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:55:10.808927 extend-filesystems[1609]: Old size kept for /dev/sda9
Jul 6 23:55:10.808927 extend-filesystems[1609]: Found sr0
Jul 6 23:55:10.808653 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:55:10.816357 jq[1629]: true
Jul 6 23:55:10.823669 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
Jul 6 23:55:10.827832 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:55:10.830269 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:55:10.830399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:55:10.830533 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:55:10.830659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:55:10.832746 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:55:10.832866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:55:10.834692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:55:10.834812 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:55:10.837598 update_engine[1623]: I20250706 23:55:10.837515 1623 main.cc:92] Flatcar Update Engine starting
Jul 6 23:55:10.841053 update_engine[1623]: I20250706 23:55:10.841034 1623 update_check_scheduler.cc:74] Next update check in 2m50s
Jul 6 23:55:10.852558 jq[1642]: true
Jul 6 23:55:10.855216 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1288)
Jul 6 23:55:10.864741 (ntainerd)[1646]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:55:10.865276 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
Jul 6 23:55:10.881557 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:55:10.882786 tar[1641]: linux-amd64/helm
Jul 6 23:55:10.884992 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:55:10.885012 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:55:10.885138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:55:10.885148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:55:10.891633 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
Jul 6 23:55:10.897648 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:55:10.898411 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:56:37.056598 systemd-timesyncd[1599]: Contacted time server 72.30.35.89:123 (0.flatcar.pool.ntp.org).
Jul 6 23:56:37.056808 systemd-timesyncd[1599]: Initial clock synchronization to Sun 2025-07-06 23:56:37.056523 UTC.
Jul 6 23:56:37.056909 systemd-resolved[1540]: Clock change detected. Flushing caches.
Jul 6 23:56:37.058970 systemd-logind[1621]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 6 23:56:37.059900 systemd-logind[1621]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:56:37.061043 systemd-logind[1621]: New seat seat0.
Jul 6 23:56:37.064525 unknown[1655]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
Jul 6 23:56:37.080927 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:56:37.080945 bash[1677]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:56:37.066151 unknown[1655]: Core dump limit set to -1
Jul 6 23:56:37.074898 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:56:37.084714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:56:37.088920 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
Jul 6 23:56:37.094219 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:56:37.204284 locksmithd[1672]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:56:37.372962 containerd[1646]: time="2025-07-06T23:56:37.372909609Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:56:37.416657 containerd[1646]: time="2025-07-06T23:56:37.416625397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.418737 containerd[1646]: time="2025-07-06T23:56:37.418716666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:37.418737 containerd[1646]: time="2025-07-06T23:56:37.418734297Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:56:37.418786 containerd[1646]: time="2025-07-06T23:56:37.418743715Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:56:37.418845 containerd[1646]: time="2025-07-06T23:56:37.418835386Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:56:37.418862 containerd[1646]: time="2025-07-06T23:56:37.418849598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.418900 containerd[1646]: time="2025-07-06T23:56:37.418888416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:37.418917 containerd[1646]: time="2025-07-06T23:56:37.418898677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419016 containerd[1646]: time="2025-07-06T23:56:37.419004014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419016 containerd[1646]: time="2025-07-06T23:56:37.419015545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419056 containerd[1646]: time="2025-07-06T23:56:37.419025691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419056 containerd[1646]: time="2025-07-06T23:56:37.419031618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419082 containerd[1646]: time="2025-07-06T23:56:37.419075473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419197 containerd[1646]: time="2025-07-06T23:56:37.419186899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419270 containerd[1646]: time="2025-07-06T23:56:37.419258907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:56:37.419290 containerd[1646]: time="2025-07-06T23:56:37.419269158Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:56:37.419320 containerd[1646]: time="2025-07-06T23:56:37.419310214Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:56:37.419350 containerd[1646]: time="2025-07-06T23:56:37.419339782Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:56:37.435651 containerd[1646]: time="2025-07-06T23:56:37.435624190Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:56:37.435729 containerd[1646]: time="2025-07-06T23:56:37.435663506Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:56:37.435729 containerd[1646]: time="2025-07-06T23:56:37.435673353Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:56:37.435729 containerd[1646]: time="2025-07-06T23:56:37.435685524Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:56:37.435729 containerd[1646]: time="2025-07-06T23:56:37.435696004Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:56:37.435801 containerd[1646]: time="2025-07-06T23:56:37.435789316Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:56:37.436636 containerd[1646]: time="2025-07-06T23:56:37.436623864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:56:37.436703 containerd[1646]: time="2025-07-06T23:56:37.436692724Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:56:37.436724 containerd[1646]: time="2025-07-06T23:56:37.436704462Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:56:37.436724 containerd[1646]: time="2025-07-06T23:56:37.436712897Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:56:37.436751 containerd[1646]: time="2025-07-06T23:56:37.436723127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436751 containerd[1646]: time="2025-07-06T23:56:37.436730747Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436751 containerd[1646]: time="2025-07-06T23:56:37.436737740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436751 containerd[1646]: time="2025-07-06T23:56:37.436746566Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436804 containerd[1646]: time="2025-07-06T23:56:37.436755274Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436804 containerd[1646]: time="2025-07-06T23:56:37.436791563Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436804 containerd[1646]: time="2025-07-06T23:56:37.436800363Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436841 containerd[1646]: time="2025-07-06T23:56:37.436807529Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:56:37.436841 containerd[1646]: time="2025-07-06T23:56:37.436818911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436841 containerd[1646]: time="2025-07-06T23:56:37.436826889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436841 containerd[1646]: time="2025-07-06T23:56:37.436834021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436897 containerd[1646]: time="2025-07-06T23:56:37.436854030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436897 containerd[1646]: time="2025-07-06T23:56:37.436864506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436897 containerd[1646]: time="2025-07-06T23:56:37.436872072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436897 containerd[1646]: time="2025-07-06T23:56:37.436878349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436897 containerd[1646]: time="2025-07-06T23:56:37.436892103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436899721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436907599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436913941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436929254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436936554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.436963 containerd[1646]: time="2025-07-06T23:56:37.436948528Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 6 23:56:37.437038 containerd[1646]: time="2025-07-06T23:56:37.436963735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.437038 containerd[1646]: time="2025-07-06T23:56:37.436971166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.437038 containerd[1646]: time="2025-07-06T23:56:37.436976992Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 6 23:56:37.437075 containerd[1646]: time="2025-07-06T23:56:37.437047930Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 6 23:56:37.437075 containerd[1646]: time="2025-07-06T23:56:37.437060380Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 6 23:56:37.437104 containerd[1646]: time="2025-07-06T23:56:37.437066959Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 6 23:56:37.437120 containerd[1646]: time="2025-07-06T23:56:37.437102476Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 6 23:56:37.437120 containerd[1646]: time="2025-07-06T23:56:37.437108881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.437120 containerd[1646]: time="2025-07-06T23:56:37.437115562Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 6 23:56:37.437160 containerd[1646]: time="2025-07-06T23:56:37.437120935Z" level=info msg="NRI interface is disabled by configuration."
Jul 6 23:56:37.437160 containerd[1646]: time="2025-07-06T23:56:37.437127321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 6 23:56:37.439171 containerd[1646]: time="2025-07-06T23:56:37.438801977Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:56:37.439171 containerd[1646]: time="2025-07-06T23:56:37.438852834Z" level=info msg="Connect containerd service"
Jul 6 23:56:37.439171 containerd[1646]: time="2025-07-06T23:56:37.438878199Z" level=info msg="using legacy CRI server"
Jul 6 23:56:37.439171 containerd[1646]: time="2025-07-06T23:56:37.438883913Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:56:37.439171 containerd[1646]: time="2025-07-06T23:56:37.438946232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439329899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439394958Z" level=info msg="Start subscribing containerd event"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439426473Z" level=info msg="Start recovering state"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439466885Z" level=info msg="Start event monitor"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439479338Z" level=info msg="Start snapshots syncer"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439484810Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:56:37.439597 containerd[1646]: time="2025-07-06T23:56:37.439488654Z" level=info msg="Start streaming server"
Jul 6 23:56:37.439730 containerd[1646]: time="2025-07-06T23:56:37.439718215Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:56:37.439769 containerd[1646]: time="2025-07-06T23:56:37.439749603Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:56:37.445176 containerd[1646]: time="2025-07-06T23:56:37.443929799Z" level=info msg="containerd successfully booted in 0.071978s"
Jul 6 23:56:37.444011 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:56:37.489188 tar[1641]: linux-amd64/LICENSE
Jul 6 23:56:37.489301 tar[1641]: linux-amd64/README.md
Jul 6 23:56:37.498000 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:56:37.520727 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:56:37.533688 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:56:37.538800 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:56:37.542167 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:56:37.542315 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:56:37.547718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:56:37.553255 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:56:37.554971 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:56:37.556896 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:56:37.557177 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:56:37.615772 systemd-networkd[1286]: ens192: Gained IPv6LL
Jul 6 23:56:37.617105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:56:37.618254 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:56:37.622981 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Jul 6 23:56:37.635909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:37.637931 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:56:37.668176 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:56:37.668314 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jul 6 23:56:37.670527 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:56:37.671143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:56:38.954956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:38.955347 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:56:38.955712 systemd[1]: Startup finished in 7.314s (kernel) + 4.900s (userspace) = 12.215s.
Jul 6 23:56:38.960884 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:56:39.019882 login[1771]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 6 23:56:39.021486 login[1772]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 6 23:56:39.029693 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:56:39.030079 systemd-logind[1621]: New session 2 of user core.
Jul 6 23:56:39.039758 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:56:39.042432 systemd-logind[1621]: New session 1 of user core.
Jul 6 23:56:39.050306 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:56:39.059746 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:56:39.062865 (systemd)[1821]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:56:39.145126 systemd[1821]: Queued start job for default target default.target.
Jul 6 23:56:39.145352 systemd[1821]: Created slice app.slice - User Application Slice.
Jul 6 23:56:39.145368 systemd[1821]: Reached target paths.target - Paths.
Jul 6 23:56:39.145376 systemd[1821]: Reached target timers.target - Timers.
Jul 6 23:56:39.149700 systemd[1821]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:56:39.153167 systemd[1821]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:56:39.153744 systemd[1821]: Reached target sockets.target - Sockets.
Jul 6 23:56:39.153755 systemd[1821]: Reached target basic.target - Basic System.
Jul 6 23:56:39.153777 systemd[1821]: Reached target default.target - Main User Target.
Jul 6 23:56:39.153793 systemd[1821]: Startup finished in 86ms.
Jul 6 23:56:39.155826 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:56:39.156562 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:56:39.156980 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:56:39.655733 kubelet[1812]: E0706 23:56:39.655674 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:56:39.656918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:56:39.657032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:56:49.675200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:56:49.685762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:49.789697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:56:49.792062 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:56:49.844300 kubelet[1871]: E0706 23:56:49.844258 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:56:49.847715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:56:49.847822 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:56:59.925276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 6 23:56:59.931753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:56:59.999462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:00.001516 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:57:00.075295 kubelet[1891]: E0706 23:57:00.075257 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:57:00.076648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:57:00.076752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:57:07.170237 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:57:07.174792 systemd[1]: Started sshd@0-139.178.70.109:22-139.178.68.195:36482.service - OpenSSH per-connection server daemon (139.178.68.195:36482).
Jul 6 23:57:07.204626 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 36482 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.205438 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.208394 systemd-logind[1621]: New session 3 of user core.
Jul 6 23:57:07.212792 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:57:07.265816 systemd[1]: Started sshd@1-139.178.70.109:22-139.178.68.195:36498.service - OpenSSH per-connection server daemon (139.178.68.195:36498).
Jul 6 23:57:07.290951 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 36498 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.292319 sshd[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.295890 systemd-logind[1621]: New session 4 of user core.
Jul 6 23:57:07.305171 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:57:07.360252 sshd[1903]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:07.367677 systemd[1]: Started sshd@2-139.178.70.109:22-139.178.68.195:36510.service - OpenSSH per-connection server daemon (139.178.68.195:36510).
Jul 6 23:57:07.368006 systemd[1]: sshd@1-139.178.70.109:22-139.178.68.195:36498.service: Deactivated successfully.
Jul 6 23:57:07.370181 systemd-logind[1621]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:57:07.370748 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:57:07.372045 systemd-logind[1621]: Removed session 4.
Jul 6 23:57:07.392168 sshd[1908]: Accepted publickey for core from 139.178.68.195 port 36510 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.393220 sshd[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.397665 systemd-logind[1621]: New session 5 of user core.
Jul 6 23:57:07.403961 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:57:07.451702 sshd[1908]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:07.461113 systemd[1]: Started sshd@3-139.178.70.109:22-139.178.68.195:36512.service - OpenSSH per-connection server daemon (139.178.68.195:36512).
Jul 6 23:57:07.461807 systemd[1]: sshd@2-139.178.70.109:22-139.178.68.195:36510.service: Deactivated successfully.
Jul 6 23:57:07.462965 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:57:07.463396 systemd-logind[1621]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:57:07.464201 systemd-logind[1621]: Removed session 5.
Jul 6 23:57:07.485625 sshd[1916]: Accepted publickey for core from 139.178.68.195 port 36512 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.486378 sshd[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.489151 systemd-logind[1621]: New session 6 of user core.
Jul 6 23:57:07.491740 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:57:07.543757 sshd[1916]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:07.554897 systemd[1]: Started sshd@4-139.178.70.109:22-139.178.68.195:36524.service - OpenSSH per-connection server daemon (139.178.68.195:36524).
Jul 6 23:57:07.556964 systemd[1]: sshd@3-139.178.70.109:22-139.178.68.195:36512.service: Deactivated successfully.
Jul 6 23:57:07.558000 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:57:07.560763 systemd-logind[1621]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:57:07.561578 systemd-logind[1621]: Removed session 6.
Jul 6 23:57:07.582481 sshd[1924]: Accepted publickey for core from 139.178.68.195 port 36524 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.583289 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.585931 systemd-logind[1621]: New session 7 of user core.
Jul 6 23:57:07.591744 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:57:07.650161 sudo[1931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:57:07.650375 sudo[1931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:07.667099 sudo[1931]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:07.668051 sshd[1924]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:07.674772 systemd[1]: Started sshd@5-139.178.70.109:22-139.178.68.195:36526.service - OpenSSH per-connection server daemon (139.178.68.195:36526).
Jul 6 23:57:07.675304 systemd[1]: sshd@4-139.178.70.109:22-139.178.68.195:36524.service: Deactivated successfully.
Jul 6 23:57:07.676052 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:57:07.677137 systemd-logind[1621]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:57:07.678306 systemd-logind[1621]: Removed session 7.
Jul 6 23:57:07.698739 sshd[1934]: Accepted publickey for core from 139.178.68.195 port 36526 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.699438 sshd[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.701785 systemd-logind[1621]: New session 8 of user core.
Jul 6 23:57:07.708761 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:57:07.757326 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:57:07.757516 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:07.759828 sudo[1941]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:07.763680 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 6 23:57:07.763888 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:07.774794 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 6 23:57:07.775733 auditctl[1944]: No rules
Jul 6 23:57:07.775964 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:57:07.776113 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 6 23:57:07.779025 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:57:07.797865 augenrules[1963]: No rules
Jul 6 23:57:07.798828 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:57:07.799715 sudo[1940]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:07.801347 sshd[1934]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:07.803186 systemd[1]: sshd@5-139.178.70.109:22-139.178.68.195:36526.service: Deactivated successfully.
Jul 6 23:57:07.804942 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:57:07.805460 systemd-logind[1621]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:57:07.810845 systemd[1]: Started sshd@6-139.178.70.109:22-139.178.68.195:36528.service - OpenSSH per-connection server daemon (139.178.68.195:36528).
Jul 6 23:57:07.811524 systemd-logind[1621]: Removed session 8.
Jul 6 23:57:07.834665 sshd[1972]: Accepted publickey for core from 139.178.68.195 port 36528 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:57:07.835319 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:07.838349 systemd-logind[1621]: New session 9 of user core.
Jul 6 23:57:07.843743 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:57:07.893004 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:57:07.893211 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:08.386884 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 6 23:57:08.386958 (dockerd)[1992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 6 23:57:08.750312 dockerd[1992]: time="2025-07-06T23:57:08.750224002Z" level=info msg="Starting up"
Jul 6 23:57:09.184386 dockerd[1992]: time="2025-07-06T23:57:09.184362121Z" level=info msg="Loading containers: start."
Jul 6 23:57:09.266624 kernel: Initializing XFRM netlink socket
Jul 6 23:57:09.420492 systemd-networkd[1286]: docker0: Link UP
Jul 6 23:57:09.431846 dockerd[1992]: time="2025-07-06T23:57:09.431809327Z" level=info msg="Loading containers: done."
Jul 6 23:57:09.441554 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck55330256-merged.mount: Deactivated successfully.
Jul 6 23:57:09.444056 dockerd[1992]: time="2025-07-06T23:57:09.444030174Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 6 23:57:09.444146 dockerd[1992]: time="2025-07-06T23:57:09.444128233Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 6 23:57:09.444229 dockerd[1992]: time="2025-07-06T23:57:09.444214959Z" level=info msg="Daemon has completed initialization"
Jul 6 23:57:09.463199 dockerd[1992]: time="2025-07-06T23:57:09.463141372Z" level=info msg="API listen on /run/docker.sock"
Jul 6 23:57:09.463382 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 6 23:57:10.175212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 6 23:57:10.181818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:11.194804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:11.198737 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:57:11.257897 kubelet[2140]: E0706 23:57:11.257862 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:57:11.259041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:57:11.259151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:57:11.872135 containerd[1646]: time="2025-07-06T23:57:11.871876708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 6 23:57:13.242812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425419377.mount: Deactivated successfully.
Jul 6 23:57:14.848934 containerd[1646]: time="2025-07-06T23:57:14.848892780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:14.858383 containerd[1646]: time="2025-07-06T23:57:14.858349378Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 6 23:57:14.871632 containerd[1646]: time="2025-07-06T23:57:14.871591978Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:14.896679 containerd[1646]: time="2025-07-06T23:57:14.896642655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:14.897452 containerd[1646]: time="2025-07-06T23:57:14.897327007Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.025426326s"
Jul 6 23:57:14.897452 containerd[1646]: time="2025-07-06T23:57:14.897355000Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 6 23:57:14.898216 containerd[1646]: time="2025-07-06T23:57:14.898152101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 6 23:57:17.205635 containerd[1646]: time="2025-07-06T23:57:17.205070134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:17.214405 containerd[1646]: time="2025-07-06T23:57:17.214201826Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 6 23:57:17.226479 containerd[1646]: time="2025-07-06T23:57:17.226426517Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:17.239634 containerd[1646]: time="2025-07-06T23:57:17.239414600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:17.240598 containerd[1646]: time="2025-07-06T23:57:17.240565454Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.342388846s"
Jul 6 23:57:17.240662 containerd[1646]: time="2025-07-06T23:57:17.240597764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 6 23:57:17.242163 containerd[1646]: time="2025-07-06T23:57:17.241983615Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 6 23:57:19.217509 containerd[1646]: time="2025-07-06T23:57:19.217144151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:19.226465 containerd[1646]: time="2025-07-06T23:57:19.226433877Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 6 23:57:19.236275 containerd[1646]: time="2025-07-06T23:57:19.236222069Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:19.243970 containerd[1646]: time="2025-07-06T23:57:19.243927577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:19.245525 containerd[1646]: time="2025-07-06T23:57:19.245358499Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.003344545s"
Jul 6 23:57:19.245525 containerd[1646]: time="2025-07-06T23:57:19.245382792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 6 23:57:19.245815 containerd[1646]: time="2025-07-06T23:57:19.245794599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 6 23:57:20.620038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704899161.mount: Deactivated successfully.
Jul 6 23:57:21.425166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 6 23:57:21.430783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:21.981744 containerd[1646]: time="2025-07-06T23:57:21.981249147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:21.989695 containerd[1646]: time="2025-07-06T23:57:21.989654689Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 6 23:57:21.998293 containerd[1646]: time="2025-07-06T23:57:21.997478843Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:21.998483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:22.002059 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:57:22.005711 containerd[1646]: time="2025-07-06T23:57:22.005682181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:22.007255 containerd[1646]: time="2025-07-06T23:57:22.006964915Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.761147479s"
Jul 6 23:57:22.007255 containerd[1646]: time="2025-07-06T23:57:22.006987994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 6 23:57:22.007687 containerd[1646]: time="2025-07-06T23:57:22.007582008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 6 23:57:22.043921 kubelet[2231]: E0706 23:57:22.043885 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:57:22.045108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:57:22.045211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:57:22.744636 update_engine[1623]: I20250706 23:57:22.744251 1623 update_attempter.cc:509] Updating boot flags...
Jul 6 23:57:22.788946 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2248)
Jul 6 23:57:23.007519 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2250)
Jul 6 23:57:23.746103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814011676.mount: Deactivated successfully.
Jul 6 23:57:25.319634 containerd[1646]: time="2025-07-06T23:57:25.319528646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:25.324458 containerd[1646]: time="2025-07-06T23:57:25.324401982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 6 23:57:25.333378 containerd[1646]: time="2025-07-06T23:57:25.333331151Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:25.341770 containerd[1646]: time="2025-07-06T23:57:25.341736273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:25.342583 containerd[1646]: time="2025-07-06T23:57:25.342478969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.334696051s"
Jul 6 23:57:25.342583 containerd[1646]: time="2025-07-06T23:57:25.342505062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 6 23:57:25.342952 containerd[1646]: time="2025-07-06T23:57:25.342907245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 6 23:57:26.105825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725055402.mount: Deactivated successfully.
Jul 6 23:57:26.279636 containerd[1646]: time="2025-07-06T23:57:26.279366815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:26.294831 containerd[1646]: time="2025-07-06T23:57:26.294781056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 6 23:57:26.308648 containerd[1646]: time="2025-07-06T23:57:26.308596979Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:26.320297 containerd[1646]: time="2025-07-06T23:57:26.320250880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:26.321141 containerd[1646]: time="2025-07-06T23:57:26.320830918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 977.874139ms"
Jul 6 23:57:26.321141 containerd[1646]: time="2025-07-06T23:57:26.320856774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 6 23:57:26.321886 containerd[1646]: time="2025-07-06T23:57:26.321435676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 6 23:57:27.544973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63752619.mount: Deactivated successfully.
Jul 6 23:57:30.960961 containerd[1646]: time="2025-07-06T23:57:30.960917270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:30.961644 containerd[1646]: time="2025-07-06T23:57:30.961622801Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 6 23:57:30.961989 containerd[1646]: time="2025-07-06T23:57:30.961728527Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:30.963551 containerd[1646]: time="2025-07-06T23:57:30.963522675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:57:30.964311 containerd[1646]: time="2025-07-06T23:57:30.964219868Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.64276427s"
Jul 6 23:57:30.964311 containerd[1646]: time="2025-07-06T23:57:30.964238179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 6 23:57:32.175140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 6 23:57:32.184713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:32.611900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:32.613209 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:57:32.674493 kubelet[2399]: E0706 23:57:32.674239 2399 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:57:32.676548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:57:32.676681 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:57:33.252573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:33.256727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:33.278211 systemd[1]: Reloading requested from client PID 2415 ('systemctl') (unit session-9.scope)...
Jul 6 23:57:33.278305 systemd[1]: Reloading...
Jul 6 23:57:33.339630 zram_generator::config[2452]: No configuration found.
Jul 6 23:57:33.403456 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 6 23:57:33.418676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:57:33.461947 systemd[1]: Reloading finished in 183 ms.
Jul 6 23:57:33.491653 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:57:33.491738 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:57:33.491937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:33.497747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:33.865492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:33.868786 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:57:33.920611 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:33.920611 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:57:33.920611 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:57:33.920865 kubelet[2530]: I0706 23:57:33.920661 2530 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:57:34.201623 kubelet[2530]: I0706 23:57:34.201576 2530 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:57:34.201623 kubelet[2530]: I0706 23:57:34.201598 2530 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:57:34.202621 kubelet[2530]: I0706 23:57:34.201885 2530 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:57:34.248483 kubelet[2530]: I0706 23:57:34.248456 2530 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:57:34.249192 kubelet[2530]: E0706 23:57:34.249164 2530 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:34.262460 kubelet[2530]: E0706 23:57:34.262433 2530 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:57:34.262460 kubelet[2530]: I0706 23:57:34.262457 2530 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:57:34.267575 kubelet[2530]: I0706 23:57:34.267562 2530 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:57:34.270166 kubelet[2530]: I0706 23:57:34.270152 2530 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:57:34.270245 kubelet[2530]: I0706 23:57:34.270221 2530 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:57:34.270349 kubelet[2530]: I0706 23:57:34.270243 2530 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Jul 6 23:57:34.270417 kubelet[2530]: I0706 23:57:34.270362 2530 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:57:34.270417 kubelet[2530]: I0706 23:57:34.270370 2530 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:57:34.270455 kubelet[2530]: I0706 23:57:34.270429 2530 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:34.274108 kubelet[2530]: I0706 23:57:34.274085 2530 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:57:34.274108 kubelet[2530]: I0706 23:57:34.274103 2530 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:57:34.274155 kubelet[2530]: I0706 23:57:34.274127 2530 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:57:34.274155 kubelet[2530]: I0706 23:57:34.274144 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:57:34.286904 kubelet[2530]: W0706 23:57:34.286844 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:34.286904 kubelet[2530]: E0706 23:57:34.286888 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:34.287136 kubelet[2530]: I0706 23:57:34.287033 2530 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:57:34.293689 kubelet[2530]: I0706 23:57:34.293588 2530 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 
23:57:34.293689 kubelet[2530]: W0706 23:57:34.293670 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:34.293753 kubelet[2530]: E0706 23:57:34.293696 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:34.298209 kubelet[2530]: W0706 23:57:34.298081 2530 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:57:34.298645 kubelet[2530]: I0706 23:57:34.298453 2530 server.go:1274] "Started kubelet" Jul 6 23:57:34.299129 kubelet[2530]: I0706 23:57:34.299117 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:57:34.308106 kubelet[2530]: I0706 23:57:34.308081 2530 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:57:34.314062 kubelet[2530]: I0706 23:57:34.314050 2530 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:57:34.318795 kubelet[2530]: I0706 23:57:34.318770 2530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:57:34.318901 kubelet[2530]: I0706 23:57:34.318888 2530 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:57:34.319022 kubelet[2530]: I0706 23:57:34.319010 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:57:34.327109 kubelet[2530]: I0706 
23:57:34.327095 2530 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:57:34.327226 kubelet[2530]: E0706 23:57:34.327211 2530 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:57:34.333155 kubelet[2530]: I0706 23:57:34.333142 2530 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:57:34.333200 kubelet[2530]: I0706 23:57:34.333191 2530 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:57:34.346419 kubelet[2530]: E0706 23:57:34.346383 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Jul 6 23:57:34.351246 kubelet[2530]: W0706 23:57:34.351206 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:34.351283 kubelet[2530]: E0706 23:57:34.351250 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:34.351486 kubelet[2530]: E0706 23:57:34.330862 2530 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcee2f88e9880 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:57:34.29843776 +0000 UTC m=+0.426945225,LastTimestamp:2025-07-06 23:57:34.29843776 +0000 UTC m=+0.426945225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:57:34.352788 kubelet[2530]: I0706 23:57:34.352775 2530 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:57:34.352788 kubelet[2530]: I0706 23:57:34.352786 2530 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:57:34.352838 kubelet[2530]: I0706 23:57:34.352827 2530 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:57:34.352910 kubelet[2530]: E0706 23:57:34.352899 2530 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:57:34.373662 kubelet[2530]: I0706 23:57:34.373645 2530 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:57:34.374296 kubelet[2530]: I0706 23:57:34.373882 2530 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:57:34.374296 kubelet[2530]: I0706 23:57:34.373890 2530 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:57:34.374296 kubelet[2530]: I0706 23:57:34.373911 2530 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:34.374426 kubelet[2530]: I0706 23:57:34.374399 2530 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:57:34.374426 kubelet[2530]: I0706 23:57:34.374416 2530 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:57:34.374481 kubelet[2530]: I0706 23:57:34.374432 2530 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:57:34.375070 kubelet[2530]: E0706 23:57:34.374944 2530 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:57:34.379539 kubelet[2530]: I0706 23:57:34.379501 2530 policy_none.go:49] "None policy: Start" Jul 6 23:57:34.380010 kubelet[2530]: I0706 23:57:34.379953 2530 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:57:34.380010 kubelet[2530]: I0706 23:57:34.379965 2530 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:57:34.385345 kubelet[2530]: W0706 23:57:34.385299 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:34.385345 kubelet[2530]: E0706 23:57:34.385331 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:34.389620 kubelet[2530]: I0706 23:57:34.389332 2530 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:57:34.389620 kubelet[2530]: I0706 23:57:34.389430 2530 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:57:34.389620 kubelet[2530]: I0706 23:57:34.389439 2530 container_log_manager.go:189] "Initializing container log 
rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:57:34.390235 kubelet[2530]: I0706 23:57:34.390164 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:57:34.392221 kubelet[2530]: E0706 23:57:34.392209 2530 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:57:34.492528 kubelet[2530]: I0706 23:57:34.492434 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:34.494880 kubelet[2530]: E0706 23:57:34.494755 2530 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 6 23:57:34.547619 kubelet[2530]: E0706 23:57:34.547580 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Jul 6 23:57:34.634628 kubelet[2530]: I0706 23:57:34.634497 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:34.634628 kubelet[2530]: I0706 23:57:34.634521 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:34.634628 kubelet[2530]: I0706 23:57:34.634533 2530 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:34.634628 kubelet[2530]: I0706 23:57:34.634542 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:34.634628 kubelet[2530]: I0706 23:57:34.634553 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:34.634780 kubelet[2530]: I0706 23:57:34.634573 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:34.634780 kubelet[2530]: I0706 23:57:34.634582 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:34.634780 kubelet[2530]: I0706 23:57:34.634590 2530 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:34.634780 kubelet[2530]: I0706 23:57:34.634599 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:57:34.696425 kubelet[2530]: I0706 23:57:34.696404 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:34.696636 kubelet[2530]: E0706 23:57:34.696586 2530 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 6 23:57:34.784453 containerd[1646]: time="2025-07-06T23:57:34.783872276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:900aeb3c3cb3a7c8edd62fd09c86b25d,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:34.784825 containerd[1646]: time="2025-07-06T23:57:34.784490975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:34.797318 containerd[1646]: time="2025-07-06T23:57:34.797245847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:34.948488 kubelet[2530]: E0706 23:57:34.948461 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Jul 6 23:57:35.098243 kubelet[2530]: I0706 23:57:35.097820 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:35.098243 kubelet[2530]: E0706 23:57:35.097990 2530 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 6 23:57:35.134683 kubelet[2530]: W0706 23:57:35.134646 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:35.134759 kubelet[2530]: E0706 23:57:35.134689 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:35.230563 kubelet[2530]: W0706 23:57:35.230524 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:35.230563 kubelet[2530]: E0706 23:57:35.230567 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" 
logger="UnhandledError" Jul 6 23:57:35.251443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39436992.mount: Deactivated successfully. Jul 6 23:57:35.254256 containerd[1646]: time="2025-07-06T23:57:35.254212275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:35.254742 containerd[1646]: time="2025-07-06T23:57:35.254695119Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:35.255208 containerd[1646]: time="2025-07-06T23:57:35.255189769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:57:35.255259 containerd[1646]: time="2025-07-06T23:57:35.255246949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:57:35.255669 containerd[1646]: time="2025-07-06T23:57:35.255658155Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:35.256121 containerd[1646]: time="2025-07-06T23:57:35.256083395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:57:35.257617 containerd[1646]: time="2025-07-06T23:57:35.256182907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:35.259910 containerd[1646]: time="2025-07-06T23:57:35.259890087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:35.260504 containerd[1646]: time="2025-07-06T23:57:35.260487705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.206781ms" Jul 6 23:57:35.261492 containerd[1646]: time="2025-07-06T23:57:35.261477065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.551684ms" Jul 6 23:57:35.261693 containerd[1646]: time="2025-07-06T23:57:35.261676569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.157638ms" Jul 6 23:57:35.366819 containerd[1646]: time="2025-07-06T23:57:35.366719956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:35.366920 containerd[1646]: time="2025-07-06T23:57:35.366832113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:35.366920 containerd[1646]: time="2025-07-06T23:57:35.366844267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.366920 containerd[1646]: time="2025-07-06T23:57:35.366890862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.369926 containerd[1646]: time="2025-07-06T23:57:35.369758352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:35.369926 containerd[1646]: time="2025-07-06T23:57:35.369782912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:35.369926 containerd[1646]: time="2025-07-06T23:57:35.369799697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.369926 containerd[1646]: time="2025-07-06T23:57:35.369841963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.373455 containerd[1646]: time="2025-07-06T23:57:35.373395370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:35.374231 containerd[1646]: time="2025-07-06T23:57:35.373549786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:35.374231 containerd[1646]: time="2025-07-06T23:57:35.373559746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.374231 containerd[1646]: time="2025-07-06T23:57:35.373612237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:35.423861 kubelet[2530]: W0706 23:57:35.423824 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:35.423942 kubelet[2530]: E0706 23:57:35.423865 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:35.428825 containerd[1646]: time="2025-07-06T23:57:35.428797247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f30f264183f412ac707f94f07091d45ca93c87ac4259fa913b2eb130053a6e2\"" Jul 6 23:57:35.432050 containerd[1646]: time="2025-07-06T23:57:35.431960860Z" level=info msg="CreateContainer within sandbox \"1f30f264183f412ac707f94f07091d45ca93c87ac4259fa913b2eb130053a6e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:57:35.435315 containerd[1646]: time="2025-07-06T23:57:35.435293930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:900aeb3c3cb3a7c8edd62fd09c86b25d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b16f4b0c78a8e9dfb207ead42e2245b389b4272a135ba052b3c5da94adfaeeee\"" Jul 6 23:57:35.436974 containerd[1646]: time="2025-07-06T23:57:35.436903673Z" level=info msg="CreateContainer within sandbox \"b16f4b0c78a8e9dfb207ead42e2245b389b4272a135ba052b3c5da94adfaeeee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:57:35.439591 containerd[1646]: 
time="2025-07-06T23:57:35.439577991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"16020dd5433b4dfe843a60bb97886da035e694a4ea102d0db010b4758753600a\"" Jul 6 23:57:35.440932 containerd[1646]: time="2025-07-06T23:57:35.440824821Z" level=info msg="CreateContainer within sandbox \"16020dd5433b4dfe843a60bb97886da035e694a4ea102d0db010b4758753600a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:57:35.449051 containerd[1646]: time="2025-07-06T23:57:35.449035766Z" level=info msg="CreateContainer within sandbox \"b16f4b0c78a8e9dfb207ead42e2245b389b4272a135ba052b3c5da94adfaeeee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae70567129c9a92b2451965909ccbee0a96a2390b4bec1ad38607fe9d5cfe757\"" Jul 6 23:57:35.449405 containerd[1646]: time="2025-07-06T23:57:35.449385742Z" level=info msg="StartContainer for \"ae70567129c9a92b2451965909ccbee0a96a2390b4bec1ad38607fe9d5cfe757\"" Jul 6 23:57:35.450011 containerd[1646]: time="2025-07-06T23:57:35.449995576Z" level=info msg="CreateContainer within sandbox \"16020dd5433b4dfe843a60bb97886da035e694a4ea102d0db010b4758753600a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ada0c74c76cd18bd8f36726a1286fe4e3179d4b8e3fa48f1b37438e9f440758f\"" Jul 6 23:57:35.450293 containerd[1646]: time="2025-07-06T23:57:35.450276513Z" level=info msg="StartContainer for \"ada0c74c76cd18bd8f36726a1286fe4e3179d4b8e3fa48f1b37438e9f440758f\"" Jul 6 23:57:35.453946 containerd[1646]: time="2025-07-06T23:57:35.453893972Z" level=info msg="CreateContainer within sandbox \"1f30f264183f412ac707f94f07091d45ca93c87ac4259fa913b2eb130053a6e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e631e4ef5d6e65831c3b8dc2a0198ea471b5d93bab372ee5fe0e6544b89e6638\"" Jul 6 23:57:35.454599 containerd[1646]: 
time="2025-07-06T23:57:35.454587830Z" level=info msg="StartContainer for \"e631e4ef5d6e65831c3b8dc2a0198ea471b5d93bab372ee5fe0e6544b89e6638\"" Jul 6 23:57:35.515393 containerd[1646]: time="2025-07-06T23:57:35.515368524Z" level=info msg="StartContainer for \"ae70567129c9a92b2451965909ccbee0a96a2390b4bec1ad38607fe9d5cfe757\" returns successfully" Jul 6 23:57:35.519565 containerd[1646]: time="2025-07-06T23:57:35.519441616Z" level=info msg="StartContainer for \"ada0c74c76cd18bd8f36726a1286fe4e3179d4b8e3fa48f1b37438e9f440758f\" returns successfully" Jul 6 23:57:35.527157 containerd[1646]: time="2025-07-06T23:57:35.527088791Z" level=info msg="StartContainer for \"e631e4ef5d6e65831c3b8dc2a0198ea471b5d93bab372ee5fe0e6544b89e6638\" returns successfully" Jul 6 23:57:35.660351 kubelet[2530]: W0706 23:57:35.660265 2530 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Jul 6 23:57:35.660351 kubelet[2530]: E0706 23:57:35.660308 2530 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:57:35.748892 kubelet[2530]: E0706 23:57:35.748797 2530 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Jul 6 23:57:35.900804 kubelet[2530]: I0706 23:57:35.900786 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:35.901027 kubelet[2530]: E0706 23:57:35.901010 2530 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Jul 6 23:57:37.288121 kubelet[2530]: I0706 23:57:37.287987 2530 apiserver.go:52] "Watching apiserver" Jul 6 23:57:37.290510 kubelet[2530]: E0706 23:57:37.290490 2530 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 6 23:57:37.334134 kubelet[2530]: I0706 23:57:37.334086 2530 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:57:37.351299 kubelet[2530]: E0706 23:57:37.351276 2530 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:57:37.502679 kubelet[2530]: I0706 23:57:37.502600 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:37.509953 kubelet[2530]: I0706 23:57:37.509932 2530 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:57:39.015938 systemd[1]: Reloading requested from client PID 2800 ('systemctl') (unit session-9.scope)... Jul 6 23:57:39.015950 systemd[1]: Reloading... Jul 6 23:57:39.082633 zram_generator::config[2841]: No configuration found. Jul 6 23:57:39.165582 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 6 23:57:39.181882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:57:39.238842 systemd[1]: Reloading finished in 222 ms. 
Jul 6 23:57:39.264839 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:39.278318 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:57:39.278530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:39.282971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:57:39.577712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:57:39.580275 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:57:39.623536 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:57:39.623536 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:57:39.623536 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:57:39.623536 kubelet[2915]: I0706 23:57:39.623520 2915 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:57:39.637302 kubelet[2915]: I0706 23:57:39.636591 2915 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:57:39.637302 kubelet[2915]: I0706 23:57:39.636625 2915 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:57:39.637302 kubelet[2915]: I0706 23:57:39.636874 2915 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:57:39.651919 kubelet[2915]: I0706 23:57:39.651898 2915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:57:39.664629 kubelet[2915]: I0706 23:57:39.664596 2915 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:57:39.689238 kubelet[2915]: E0706 23:57:39.689218 2915 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:57:39.689345 kubelet[2915]: I0706 23:57:39.689325 2915 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:57:39.706792 kubelet[2915]: I0706 23:57:39.706775 2915 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:57:39.707011 kubelet[2915]: I0706 23:57:39.706999 2915 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:57:39.707091 kubelet[2915]: I0706 23:57:39.707069 2915 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:57:39.707206 kubelet[2915]: I0706 23:57:39.707091 2915 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Jul 6 23:57:39.707265 kubelet[2915]: I0706 23:57:39.707212 2915 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:57:39.707265 kubelet[2915]: I0706 23:57:39.707219 2915 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:57:39.707265 kubelet[2915]: I0706 23:57:39.707236 2915 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:39.707332 kubelet[2915]: I0706 23:57:39.707298 2915 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:57:39.707332 kubelet[2915]: I0706 23:57:39.707306 2915 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:57:39.707332 kubelet[2915]: I0706 23:57:39.707325 2915 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:57:39.707332 kubelet[2915]: I0706 23:57:39.707332 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:57:39.729926 kubelet[2915]: I0706 23:57:39.729586 2915 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:57:39.730897 kubelet[2915]: I0706 23:57:39.730476 2915 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:57:39.730897 kubelet[2915]: I0706 23:57:39.730770 2915 server.go:1274] "Started kubelet" Jul 6 23:57:39.731826 kubelet[2915]: I0706 23:57:39.731818 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:57:39.735401 kubelet[2915]: I0706 23:57:39.735361 2915 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:57:39.756512 kubelet[2915]: I0706 23:57:39.756490 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:57:39.756747 kubelet[2915]: I0706 23:57:39.756737 2915 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:57:39.778406 kubelet[2915]: 
I0706 23:57:39.778384 2915 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:57:39.778724 kubelet[2915]: I0706 23:57:39.778489 2915 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:57:39.778724 kubelet[2915]: E0706 23:57:39.778649 2915 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:57:39.779492 kubelet[2915]: I0706 23:57:39.779224 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:57:39.782189 kubelet[2915]: I0706 23:57:39.782151 2915 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:57:39.782243 kubelet[2915]: I0706 23:57:39.782224 2915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:57:39.783586 kubelet[2915]: I0706 23:57:39.782982 2915 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:57:39.783586 kubelet[2915]: I0706 23:57:39.783496 2915 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:57:39.785074 kubelet[2915]: I0706 23:57:39.784243 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:57:39.785074 kubelet[2915]: I0706 23:57:39.784871 2915 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:57:39.785074 kubelet[2915]: I0706 23:57:39.784882 2915 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:57:39.785074 kubelet[2915]: I0706 23:57:39.784895 2915 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:57:39.785074 kubelet[2915]: E0706 23:57:39.784915 2915 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:57:39.787825 kubelet[2915]: I0706 23:57:39.787807 2915 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:57:39.826710 kubelet[2915]: I0706 23:57:39.826658 2915 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:57:39.826812 kubelet[2915]: I0706 23:57:39.826804 2915 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:57:39.826859 kubelet[2915]: I0706 23:57:39.826855 2915 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:57:39.826970 kubelet[2915]: I0706 23:57:39.826963 2915 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:57:39.827013 kubelet[2915]: I0706 23:57:39.827000 2915 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:57:39.827043 kubelet[2915]: I0706 23:57:39.827040 2915 policy_none.go:49] "None policy: Start" Jul 6 23:57:39.827372 kubelet[2915]: I0706 23:57:39.827366 2915 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:57:39.827432 kubelet[2915]: I0706 23:57:39.827427 2915 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:57:39.827575 kubelet[2915]: I0706 23:57:39.827570 2915 state_mem.go:75] "Updated machine memory state" Jul 6 23:57:39.830276 kubelet[2915]: I0706 23:57:39.830231 2915 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:57:39.830648 kubelet[2915]: I0706 23:57:39.830435 2915 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 6 23:57:39.830648 kubelet[2915]: I0706 23:57:39.830443 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:57:39.830648 kubelet[2915]: I0706 23:57:39.830588 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:57:39.891650 kubelet[2915]: E0706 23:57:39.891623 2915 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:39.932176 kubelet[2915]: I0706 23:57:39.932155 2915 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:57:39.936115 kubelet[2915]: I0706 23:57:39.936095 2915 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:57:39.936214 kubelet[2915]: I0706 23:57:39.936155 2915 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:57:40.084964 kubelet[2915]: I0706 23:57:40.084850 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:40.084964 kubelet[2915]: I0706 23:57:40.084886 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:40.084964 kubelet[2915]: I0706 23:57:40.084903 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:40.084964 kubelet[2915]: I0706 23:57:40.084913 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:40.084964 kubelet[2915]: I0706 23:57:40.084923 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:40.085139 kubelet[2915]: I0706 23:57:40.084932 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:40.085139 kubelet[2915]: I0706 23:57:40.084942 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/900aeb3c3cb3a7c8edd62fd09c86b25d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"900aeb3c3cb3a7c8edd62fd09c86b25d\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:57:40.085139 kubelet[2915]: I0706 23:57:40.084955 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:57:40.085139 kubelet[2915]: I0706 23:57:40.084964 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:57:40.707907 kubelet[2915]: I0706 23:57:40.707871 2915 apiserver.go:52] "Watching apiserver" Jul 6 23:57:40.784029 kubelet[2915]: I0706 23:57:40.783996 2915 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:57:40.848021 kubelet[2915]: I0706 23:57:40.847230 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.847184438 podStartE2EDuration="2.847184438s" podCreationTimestamp="2025-07-06 23:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:40.847135964 +0000 UTC m=+1.252320386" watchObservedRunningTime="2025-07-06 23:57:40.847184438 +0000 UTC m=+1.252368856" Jul 6 23:57:40.848021 kubelet[2915]: I0706 23:57:40.847336 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.847328465 podStartE2EDuration="1.847328465s" podCreationTimestamp="2025-07-06 23:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:40.841248688 +0000 UTC m=+1.246433111" watchObservedRunningTime="2025-07-06 23:57:40.847328465 +0000 UTC m=+1.252512882" Jul 6 
23:57:40.860931 kubelet[2915]: I0706 23:57:40.860725 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.860712583 podStartE2EDuration="1.860712583s" podCreationTimestamp="2025-07-06 23:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:40.853165662 +0000 UTC m=+1.258350084" watchObservedRunningTime="2025-07-06 23:57:40.860712583 +0000 UTC m=+1.265897005" Jul 6 23:57:45.965814 kubelet[2915]: I0706 23:57:45.965790 2915 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:57:45.966357 containerd[1646]: time="2025-07-06T23:57:45.966332337Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:57:45.966982 kubelet[2915]: I0706 23:57:45.966447 2915 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:57:47.130520 kubelet[2915]: I0706 23:57:47.130491 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-92hzj\" (UID: \"7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-92hzj" Jul 6 23:57:47.130899 kubelet[2915]: I0706 23:57:47.130550 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97ab1e44-bd96-4ede-b499-97a0b4603ca1-kube-proxy\") pod \"kube-proxy-pq8r7\" (UID: \"97ab1e44-bd96-4ede-b499-97a0b4603ca1\") " pod="kube-system/kube-proxy-pq8r7" Jul 6 23:57:47.130899 kubelet[2915]: I0706 23:57:47.130573 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97ab1e44-bd96-4ede-b499-97a0b4603ca1-xtables-lock\") pod \"kube-proxy-pq8r7\" (UID: \"97ab1e44-bd96-4ede-b499-97a0b4603ca1\") " pod="kube-system/kube-proxy-pq8r7" Jul 6 23:57:47.130899 kubelet[2915]: I0706 23:57:47.130600 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkjcb\" (UniqueName: \"kubernetes.io/projected/97ab1e44-bd96-4ede-b499-97a0b4603ca1-kube-api-access-xkjcb\") pod \"kube-proxy-pq8r7\" (UID: \"97ab1e44-bd96-4ede-b499-97a0b4603ca1\") " pod="kube-system/kube-proxy-pq8r7" Jul 6 23:57:47.130899 kubelet[2915]: I0706 23:57:47.130629 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hr5\" (UniqueName: \"kubernetes.io/projected/7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee-kube-api-access-w5hr5\") pod \"tigera-operator-5bf8dfcb4-92hzj\" (UID: \"7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-92hzj" Jul 6 23:57:47.130899 kubelet[2915]: I0706 23:57:47.130640 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ab1e44-bd96-4ede-b499-97a0b4603ca1-lib-modules\") pod \"kube-proxy-pq8r7\" (UID: \"97ab1e44-bd96-4ede-b499-97a0b4603ca1\") " pod="kube-system/kube-proxy-pq8r7" Jul 6 23:57:47.347914 containerd[1646]: time="2025-07-06T23:57:47.347885323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pq8r7,Uid:97ab1e44-bd96-4ede-b499-97a0b4603ca1,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:47.362317 containerd[1646]: time="2025-07-06T23:57:47.362243612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:47.362317 containerd[1646]: time="2025-07-06T23:57:47.362287806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:47.362317 containerd[1646]: time="2025-07-06T23:57:47.362299590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:47.362699 containerd[1646]: time="2025-07-06T23:57:47.362353827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:47.393229 containerd[1646]: time="2025-07-06T23:57:47.393106228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pq8r7,Uid:97ab1e44-bd96-4ede-b499-97a0b4603ca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe78637a87ae109e3b97365ba0e1214daad88d5ed5a81d4fc6c2ab8140aa909a\"" Jul 6 23:57:47.394951 containerd[1646]: time="2025-07-06T23:57:47.394887448Z" level=info msg="CreateContainer within sandbox \"fe78637a87ae109e3b97365ba0e1214daad88d5ed5a81d4fc6c2ab8140aa909a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:57:47.395908 containerd[1646]: time="2025-07-06T23:57:47.395859238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-92hzj,Uid:7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:57:47.438058 containerd[1646]: time="2025-07-06T23:57:47.438008872Z" level=info msg="CreateContainer within sandbox \"fe78637a87ae109e3b97365ba0e1214daad88d5ed5a81d4fc6c2ab8140aa909a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1528b2653fbd4f1d3e79b40cb237492186d61afec11ca05515ef9cc0dbed79c\"" Jul 6 23:57:47.438832 containerd[1646]: time="2025-07-06T23:57:47.438810698Z" level=info msg="StartContainer for 
\"b1528b2653fbd4f1d3e79b40cb237492186d61afec11ca05515ef9cc0dbed79c\"" Jul 6 23:57:47.448717 containerd[1646]: time="2025-07-06T23:57:47.442366330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:47.448717 containerd[1646]: time="2025-07-06T23:57:47.442742941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:47.448717 containerd[1646]: time="2025-07-06T23:57:47.442765477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:47.448717 containerd[1646]: time="2025-07-06T23:57:47.442864321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:47.482193 containerd[1646]: time="2025-07-06T23:57:47.482169388Z" level=info msg="StartContainer for \"b1528b2653fbd4f1d3e79b40cb237492186d61afec11ca05515ef9cc0dbed79c\" returns successfully" Jul 6 23:57:47.497294 containerd[1646]: time="2025-07-06T23:57:47.497252872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-92hzj,Uid:7e1c4648-31ea-4dc0-8d4d-eb8626dc3dee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2db6977aeb188fd0af5e3a7a6b99f0f4a4e56ffd612ccc7f4308d532c2c7a60\"" Jul 6 23:57:47.499400 containerd[1646]: time="2025-07-06T23:57:47.498552083Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:57:48.249203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382834675.mount: Deactivated successfully. Jul 6 23:57:48.951928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305458881.mount: Deactivated successfully. 
Jul 6 23:57:49.665368 containerd[1646]: time="2025-07-06T23:57:49.665320269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:49.666059 containerd[1646]: time="2025-07-06T23:57:49.665991291Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:57:49.667056 containerd[1646]: time="2025-07-06T23:57:49.666416015Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:49.669320 containerd[1646]: time="2025-07-06T23:57:49.669294911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:49.670722 containerd[1646]: time="2025-07-06T23:57:49.670701218Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.172127506s" Jul 6 23:57:49.670800 containerd[1646]: time="2025-07-06T23:57:49.670789903Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:57:49.673200 containerd[1646]: time="2025-07-06T23:57:49.673176237Z" level=info msg="CreateContainer within sandbox \"b2db6977aeb188fd0af5e3a7a6b99f0f4a4e56ffd612ccc7f4308d532c2c7a60\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:57:49.709037 kubelet[2915]: I0706 23:57:49.708889 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-pq8r7" podStartSLOduration=3.708868064 podStartE2EDuration="3.708868064s" podCreationTimestamp="2025-07-06 23:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:57:47.850223396 +0000 UTC m=+8.255407818" watchObservedRunningTime="2025-07-06 23:57:49.708868064 +0000 UTC m=+10.114052478" Jul 6 23:57:49.759972 containerd[1646]: time="2025-07-06T23:57:49.759907620Z" level=info msg="CreateContainer within sandbox \"b2db6977aeb188fd0af5e3a7a6b99f0f4a4e56ffd612ccc7f4308d532c2c7a60\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d3bc4aa63be77cf8c855ec8cea965805d6a0b4699a49dbb7447d19b8d00e48b7\"" Jul 6 23:57:49.760884 containerd[1646]: time="2025-07-06T23:57:49.760736437Z" level=info msg="StartContainer for \"d3bc4aa63be77cf8c855ec8cea965805d6a0b4699a49dbb7447d19b8d00e48b7\"" Jul 6 23:57:49.827009 containerd[1646]: time="2025-07-06T23:57:49.826979776Z" level=info msg="StartContainer for \"d3bc4aa63be77cf8c855ec8cea965805d6a0b4699a49dbb7447d19b8d00e48b7\" returns successfully" Jul 6 23:57:49.875227 kubelet[2915]: I0706 23:57:49.875125 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-92hzj" podStartSLOduration=0.701924115 podStartE2EDuration="2.875112276s" podCreationTimestamp="2025-07-06 23:57:47 +0000 UTC" firstStartedPulling="2025-07-06 23:57:47.498214232 +0000 UTC m=+7.903398645" lastFinishedPulling="2025-07-06 23:57:49.67140239 +0000 UTC m=+10.076586806" observedRunningTime="2025-07-06 23:57:49.848719873 +0000 UTC m=+10.253904286" watchObservedRunningTime="2025-07-06 23:57:49.875112276 +0000 UTC m=+10.280296694" Jul 6 23:57:56.056579 sudo[1976]: pam_unix(sudo:session): session closed for user root Jul 6 23:57:56.064701 sshd[1972]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:56.069994 systemd[1]: 
sshd@6-139.178.70.109:22-139.178.68.195:36528.service: Deactivated successfully. Jul 6 23:57:56.078210 systemd-logind[1621]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:57:56.079902 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:57:56.092990 systemd-logind[1621]: Removed session 9. Jul 6 23:57:58.603653 kubelet[2915]: I0706 23:57:58.603627 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14ef42f7-a595-4748-854f-0f874be513f5-typha-certs\") pod \"calico-typha-5f44958f74-4vm5q\" (UID: \"14ef42f7-a595-4748-854f-0f874be513f5\") " pod="calico-system/calico-typha-5f44958f74-4vm5q" Jul 6 23:57:58.603653 kubelet[2915]: I0706 23:57:58.603658 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtv4\" (UniqueName: \"kubernetes.io/projected/14ef42f7-a595-4748-854f-0f874be513f5-kube-api-access-zmtv4\") pod \"calico-typha-5f44958f74-4vm5q\" (UID: \"14ef42f7-a595-4748-854f-0f874be513f5\") " pod="calico-system/calico-typha-5f44958f74-4vm5q" Jul 6 23:57:58.604007 kubelet[2915]: I0706 23:57:58.603674 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14ef42f7-a595-4748-854f-0f874be513f5-tigera-ca-bundle\") pod \"calico-typha-5f44958f74-4vm5q\" (UID: \"14ef42f7-a595-4748-854f-0f874be513f5\") " pod="calico-system/calico-typha-5f44958f74-4vm5q" Jul 6 23:57:58.861068 containerd[1646]: time="2025-07-06T23:57:58.860894709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f44958f74-4vm5q,Uid:14ef42f7-a595-4748-854f-0f874be513f5,Namespace:calico-system,Attempt:0,}" Jul 6 23:57:59.009891 containerd[1646]: time="2025-07-06T23:57:59.009723561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:59.009891 containerd[1646]: time="2025-07-06T23:57:59.009794733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:59.009891 containerd[1646]: time="2025-07-06T23:57:59.009811090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:59.010173 containerd[1646]: time="2025-07-06T23:57:59.010081332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:59.075082 containerd[1646]: time="2025-07-06T23:57:59.075061668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f44958f74-4vm5q,Uid:14ef42f7-a595-4748-854f-0f874be513f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dcfb653af890840739df87e5114a744212d45c822f6e022ccfb50fbb4613ac7\"" Jul 6 23:57:59.133386 kubelet[2915]: I0706 23:57:59.132465 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-cni-bin-dir\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133386 kubelet[2915]: I0706 23:57:59.132499 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-lib-modules\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133386 kubelet[2915]: I0706 23:57:59.132518 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/dcaab830-5f77-4a97-8d87-0432ca45d0c9-node-certs\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133386 kubelet[2915]: I0706 23:57:59.132534 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcaab830-5f77-4a97-8d87-0432ca45d0c9-tigera-ca-bundle\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133386 kubelet[2915]: I0706 23:57:59.132549 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-var-lib-calico\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133645 kubelet[2915]: I0706 23:57:59.132565 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lsv\" (UniqueName: \"kubernetes.io/projected/dcaab830-5f77-4a97-8d87-0432ca45d0c9-kube-api-access-q6lsv\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133645 kubelet[2915]: I0706 23:57:59.132584 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-xtables-lock\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133645 kubelet[2915]: I0706 23:57:59.132600 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-flexvol-driver-host\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133645 kubelet[2915]: I0706 23:57:59.132632 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-var-run-calico\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.133645 kubelet[2915]: I0706 23:57:59.132647 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-cni-log-dir\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.139854 kubelet[2915]: I0706 23:57:59.132663 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-cni-net-dir\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.139854 kubelet[2915]: I0706 23:57:59.132674 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dcaab830-5f77-4a97-8d87-0432ca45d0c9-policysync\") pod \"calico-node-4l5b7\" (UID: \"dcaab830-5f77-4a97-8d87-0432ca45d0c9\") " pod="calico-system/calico-node-4l5b7" Jul 6 23:57:59.139854 kubelet[2915]: E0706 23:57:59.137828 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:57:59.210338 containerd[1646]: time="2025-07-06T23:57:59.210313018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:57:59.233202 kubelet[2915]: I0706 23:57:59.233175 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/451e31a5-b9c2-46fb-96e4-f1e20de500e9-socket-dir\") pod \"csi-node-driver-mvnfh\" (UID: \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\") " pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:57:59.233202 kubelet[2915]: I0706 23:57:59.233204 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mtp\" (UniqueName: \"kubernetes.io/projected/451e31a5-b9c2-46fb-96e4-f1e20de500e9-kube-api-access-47mtp\") pod \"csi-node-driver-mvnfh\" (UID: \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\") " pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:57:59.233304 kubelet[2915]: I0706 23:57:59.233232 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/451e31a5-b9c2-46fb-96e4-f1e20de500e9-varrun\") pod \"csi-node-driver-mvnfh\" (UID: \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\") " pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:57:59.233304 kubelet[2915]: I0706 23:57:59.233251 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/451e31a5-b9c2-46fb-96e4-f1e20de500e9-registration-dir\") pod \"csi-node-driver-mvnfh\" (UID: \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\") " pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:57:59.233304 kubelet[2915]: I0706 23:57:59.233272 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/451e31a5-b9c2-46fb-96e4-f1e20de500e9-kubelet-dir\") pod \"csi-node-driver-mvnfh\" (UID: \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\") " pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:57:59.295954 kubelet[2915]: E0706 23:57:59.295877 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.295954 kubelet[2915]: W0706 23:57:59.295899 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.295954 kubelet[2915]: E0706 23:57:59.295922 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.327524 containerd[1646]: time="2025-07-06T23:57:59.327500940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l5b7,Uid:dcaab830-5f77-4a97-8d87-0432ca45d0c9,Namespace:calico-system,Attempt:0,}" Jul 6 23:57:59.334029 kubelet[2915]: E0706 23:57:59.333968 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.334029 kubelet[2915]: W0706 23:57:59.333978 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.334029 kubelet[2915]: E0706 23:57:59.333991 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.334154 kubelet[2915]: E0706 23:57:59.334142 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.334154 kubelet[2915]: W0706 23:57:59.334151 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334160 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334278 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.341846 kubelet[2915]: W0706 23:57:59.334282 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334288 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334393 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.341846 kubelet[2915]: W0706 23:57:59.334398 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334405 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334529 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.341846 kubelet[2915]: W0706 23:57:59.334534 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.341846 kubelet[2915]: E0706 23:57:59.334539 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334660 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342022 kubelet[2915]: W0706 23:57:59.334665 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334672 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334793 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342022 kubelet[2915]: W0706 23:57:59.334799 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334806 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334898 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342022 kubelet[2915]: W0706 23:57:59.334902 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.334913 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.342022 kubelet[2915]: E0706 23:57:59.335046 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342180 kubelet[2915]: W0706 23:57:59.335051 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335059 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335193 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342180 kubelet[2915]: W0706 23:57:59.335198 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335207 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335297 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342180 kubelet[2915]: W0706 23:57:59.335302 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335310 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.342180 kubelet[2915]: E0706 23:57:59.335411 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342180 kubelet[2915]: W0706 23:57:59.335416 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335435 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335558 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342345 kubelet[2915]: W0706 23:57:59.335564 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335569 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335679 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342345 kubelet[2915]: W0706 23:57:59.335684 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335701 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335816 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.342345 kubelet[2915]: W0706 23:57:59.335824 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.342345 kubelet[2915]: E0706 23:57:59.335835 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.335931 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.352759 kubelet[2915]: W0706 23:57:59.335936 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.335950 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.336037 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.352759 kubelet[2915]: W0706 23:57:59.336042 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.336056 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.336141 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.352759 kubelet[2915]: W0706 23:57:59.336147 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.336155 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.352759 kubelet[2915]: E0706 23:57:59.336246 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.354859 kubelet[2915]: W0706 23:57:59.336251 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336260 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336380 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.354859 kubelet[2915]: W0706 23:57:59.336384 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336391 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336496 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.354859 kubelet[2915]: W0706 23:57:59.336501 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336510 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.354859 kubelet[2915]: E0706 23:57:59.336635 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.354859 kubelet[2915]: W0706 23:57:59.336641 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.336650 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.336812 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.355063 kubelet[2915]: W0706 23:57:59.336817 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.336821 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.336947 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.355063 kubelet[2915]: W0706 23:57:59.336952 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.336959 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.342814 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.355063 kubelet[2915]: W0706 23:57:59.342822 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.355063 kubelet[2915]: E0706 23:57:59.342832 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:57:59.362974 kubelet[2915]: E0706 23:57:59.362911 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:57:59.362974 kubelet[2915]: W0706 23:57:59.362930 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:57:59.362974 kubelet[2915]: E0706 23:57:59.362946 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:57:59.422484 containerd[1646]: time="2025-07-06T23:57:59.422185878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:59.422484 containerd[1646]: time="2025-07-06T23:57:59.422259024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:59.422484 containerd[1646]: time="2025-07-06T23:57:59.422318859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:59.424318 containerd[1646]: time="2025-07-06T23:57:59.424115244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:59.483068 containerd[1646]: time="2025-07-06T23:57:59.483032516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l5b7,Uid:dcaab830-5f77-4a97-8d87-0432ca45d0c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\"" Jul 6 23:58:00.785551 kubelet[2915]: E0706 23:58:00.785243 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:00.885313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889795588.mount: Deactivated successfully. Jul 6 23:58:01.547963 containerd[1646]: time="2025-07-06T23:58:01.547936258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:01.550056 containerd[1646]: time="2025-07-06T23:58:01.548371812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 6 23:58:01.550056 containerd[1646]: time="2025-07-06T23:58:01.548783329Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:01.550056 containerd[1646]: time="2025-07-06T23:58:01.549965702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:01.550379 containerd[1646]: time="2025-07-06T23:58:01.550361375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.339881348s" Jul 6 23:58:01.550422 containerd[1646]: time="2025-07-06T23:58:01.550378673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 6 23:58:01.551902 containerd[1646]: time="2025-07-06T23:58:01.551888003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 6 23:58:01.586926 containerd[1646]: time="2025-07-06T23:58:01.586901260Z" level=info msg="CreateContainer within sandbox \"4dcfb653af890840739df87e5114a744212d45c822f6e022ccfb50fbb4613ac7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 6 23:58:01.640541 containerd[1646]: time="2025-07-06T23:58:01.640506594Z" level=info msg="CreateContainer within sandbox \"4dcfb653af890840739df87e5114a744212d45c822f6e022ccfb50fbb4613ac7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"edb3b6405baeb3ec245b7c3acfbe8bdd9fa16a86d20cc6059c04871136adc6ae\"" Jul 6 23:58:01.642997 containerd[1646]: time="2025-07-06T23:58:01.642978641Z" level=info msg="StartContainer for \"edb3b6405baeb3ec245b7c3acfbe8bdd9fa16a86d20cc6059c04871136adc6ae\"" Jul 6 23:58:01.706817 containerd[1646]: time="2025-07-06T23:58:01.706791173Z" level=info msg="StartContainer for \"edb3b6405baeb3ec245b7c3acfbe8bdd9fa16a86d20cc6059c04871136adc6ae\" returns successfully" Jul 6 23:58:02.017164 kubelet[2915]: E0706 23:58:02.017136 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:02.017164 kubelet[2915]: W0706 23:58:02.017159 2915 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:02.017468 kubelet[2915]: E0706 23:58:02.017185 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:02.017468 kubelet[2915]: E0706 23:58:02.017357 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:02.017468 kubelet[2915]: W0706 23:58:02.017365 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:02.017468 kubelet[2915]: E0706 23:58:02.017373 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:58:02.017544 kubelet[2915]: E0706 23:58:02.017502 2915 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:58:02.017544 kubelet[2915]: W0706 23:58:02.017507 2915 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:58:02.017544 kubelet[2915]: E0706 23:58:02.017512 2915 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:58:02.786203 kubelet[2915]: E0706 23:58:02.786160 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:02.803497 containerd[1646]: time="2025-07-06T23:58:02.803406839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:02.804080 containerd[1646]: time="2025-07-06T23:58:02.803949605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:58:02.804993 containerd[1646]: time="2025-07-06T23:58:02.804954324Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:02.808861 containerd[1646]: time="2025-07-06T23:58:02.808826332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:02.810617 containerd[1646]: time="2025-07-06T23:58:02.809596403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.257572547s" Jul 6 23:58:02.810617 containerd[1646]: time="2025-07-06T23:58:02.809653644Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:58:02.812823 containerd[1646]: time="2025-07-06T23:58:02.812745392Z" level=info msg="CreateContainer within sandbox \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:58:02.834304 containerd[1646]: time="2025-07-06T23:58:02.834269784Z" level=info msg="CreateContainer within sandbox \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c\"" Jul 6 23:58:02.835622 containerd[1646]: time="2025-07-06T23:58:02.834804109Z" level=info msg="StartContainer for \"4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c\"" Jul 6 23:58:02.878312 containerd[1646]: time="2025-07-06T23:58:02.878268821Z" level=info msg="StartContainer for \"4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c\" returns successfully" Jul 6 23:58:02.908962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c-rootfs.mount: Deactivated successfully. 
Jul 6 23:58:02.962672 kubelet[2915]: I0706 23:58:02.962650 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:02.990758 kubelet[2915]: I0706 23:58:02.980965 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f44958f74-4vm5q" podStartSLOduration=2.567476407 podStartE2EDuration="4.980949451s" podCreationTimestamp="2025-07-06 23:57:58 +0000 UTC" firstStartedPulling="2025-07-06 23:57:59.138302779 +0000 UTC m=+19.543487198" lastFinishedPulling="2025-07-06 23:58:01.551775828 +0000 UTC m=+21.956960242" observedRunningTime="2025-07-06 23:58:02.007796749 +0000 UTC m=+22.412981169" watchObservedRunningTime="2025-07-06 23:58:02.980949451 +0000 UTC m=+23.386133865" Jul 6 23:58:03.393739 containerd[1646]: time="2025-07-06T23:58:03.378748422Z" level=info msg="shim disconnected" id=4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c namespace=k8s.io Jul 6 23:58:03.393739 containerd[1646]: time="2025-07-06T23:58:03.393625769Z" level=warning msg="cleaning up after shim disconnected" id=4d8422d8e17d998aead9c855410aebf763cf04d4f3b9e70157d5977aa7cb453c namespace=k8s.io Jul 6 23:58:03.393739 containerd[1646]: time="2025-07-06T23:58:03.393640752Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:03.965746 containerd[1646]: time="2025-07-06T23:58:03.965479776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:58:04.786089 kubelet[2915]: E0706 23:58:04.785757 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:06.785798 kubelet[2915]: E0706 23:58:06.785768 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:07.526302 containerd[1646]: time="2025-07-06T23:58:07.526260688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:07.532308 containerd[1646]: time="2025-07-06T23:58:07.532279473Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:07.532516 containerd[1646]: time="2025-07-06T23:58:07.532493484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.566985416s" Jul 6 23:58:07.532516 containerd[1646]: time="2025-07-06T23:58:07.532511889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:58:07.532752 containerd[1646]: time="2025-07-06T23:58:07.532736272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:07.533146 containerd[1646]: time="2025-07-06T23:58:07.533071680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:58:07.534458 containerd[1646]: time="2025-07-06T23:58:07.534438708Z" level=info msg="CreateContainer within sandbox 
\"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:58:07.552731 containerd[1646]: time="2025-07-06T23:58:07.552703942Z" level=info msg="CreateContainer within sandbox \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9\"" Jul 6 23:58:07.553196 containerd[1646]: time="2025-07-06T23:58:07.552975929Z" level=info msg="StartContainer for \"e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9\"" Jul 6 23:58:07.612787 containerd[1646]: time="2025-07-06T23:58:07.612715882Z" level=info msg="StartContainer for \"e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9\" returns successfully" Jul 6 23:58:08.785872 kubelet[2915]: E0706 23:58:08.785792 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:09.191680 kubelet[2915]: I0706 23:58:09.191189 2915 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:58:09.206746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9-rootfs.mount: Deactivated successfully. 
Jul 6 23:58:09.209732 containerd[1646]: time="2025-07-06T23:58:09.208340727Z" level=info msg="shim disconnected" id=e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9 namespace=k8s.io
Jul 6 23:58:09.209732 containerd[1646]: time="2025-07-06T23:58:09.208388752Z" level=warning msg="cleaning up after shim disconnected" id=e2d54e0c7bdaf36a9f74f5229a293db8353b7205442415433ddb9834eaa92ef9 namespace=k8s.io
Jul 6 23:58:09.209732 containerd[1646]: time="2025-07-06T23:58:09.208399257Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:58:09.296183 kubelet[2915]: I0706 23:58:09.296159 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94185960-601a-483a-9212-e04b82a9d723-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-spr8t\" (UID: \"94185960-601a-483a-9212-e04b82a9d723\") " pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:09.296183 kubelet[2915]: I0706 23:58:09.296185 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/94185960-601a-483a-9212-e04b82a9d723-goldmane-key-pair\") pod \"goldmane-58fd7646b9-spr8t\" (UID: \"94185960-601a-483a-9212-e04b82a9d723\") " pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:09.296324 kubelet[2915]: I0706 23:58:09.296200 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfsg\" (UniqueName: \"kubernetes.io/projected/fa6083f4-7360-4fbe-9116-052981667c66-kube-api-access-hjfsg\") pod \"calico-apiserver-66d984f854-sqcdr\" (UID: \"fa6083f4-7360-4fbe-9116-052981667c66\") " pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr"
Jul 6 23:58:09.296324 kubelet[2915]: I0706 23:58:09.296209 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpg62\" (UniqueName: \"kubernetes.io/projected/edda1339-9324-4b2e-a45e-3e301a3aa698-kube-api-access-mpg62\") pod \"coredns-7c65d6cfc9-b7l64\" (UID: \"edda1339-9324-4b2e-a45e-3e301a3aa698\") " pod="kube-system/coredns-7c65d6cfc9-b7l64"
Jul 6 23:58:09.296324 kubelet[2915]: I0706 23:58:09.296222 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78734851-1240-4e35-b671-f1113509a863-calico-apiserver-certs\") pod \"calico-apiserver-66d984f854-dcpkk\" (UID: \"78734851-1240-4e35-b671-f1113509a863\") " pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk"
Jul 6 23:58:09.296324 kubelet[2915]: I0706 23:58:09.296232 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-ca-bundle\") pod \"whisker-ccc6d79dd-5wwps\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " pod="calico-system/whisker-ccc6d79dd-5wwps"
Jul 6 23:58:09.296324 kubelet[2915]: I0706 23:58:09.296242 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgggw\" (UniqueName: \"kubernetes.io/projected/94185960-601a-483a-9212-e04b82a9d723-kube-api-access-cgggw\") pod \"goldmane-58fd7646b9-spr8t\" (UID: \"94185960-601a-483a-9212-e04b82a9d723\") " pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:09.296421 kubelet[2915]: I0706 23:58:09.296251 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4crb\" (UniqueName: \"kubernetes.io/projected/1aab2d05-ec88-4437-948f-f21fe9b0d771-kube-api-access-v4crb\") pod \"calico-apiserver-cbc88db65-jzzkz\" (UID: \"1aab2d05-ec88-4437-948f-f21fe9b0d771\") " pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz"
Jul 6 23:58:09.296421 kubelet[2915]: I0706 23:58:09.296261 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrxp\" (UniqueName: \"kubernetes.io/projected/8f36e826-d10d-4e96-a49b-5cddfb9434f6-kube-api-access-6hrxp\") pod \"whisker-ccc6d79dd-5wwps\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " pod="calico-system/whisker-ccc6d79dd-5wwps"
Jul 6 23:58:09.296421 kubelet[2915]: I0706 23:58:09.296271 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx848\" (UniqueName: \"kubernetes.io/projected/6ee4ce16-275b-4d42-b3b2-5651a9edac72-kube-api-access-hx848\") pod \"coredns-7c65d6cfc9-rwltj\" (UID: \"6ee4ce16-275b-4d42-b3b2-5651a9edac72\") " pod="kube-system/coredns-7c65d6cfc9-rwltj"
Jul 6 23:58:09.296421 kubelet[2915]: I0706 23:58:09.296279 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94185960-601a-483a-9212-e04b82a9d723-config\") pod \"goldmane-58fd7646b9-spr8t\" (UID: \"94185960-601a-483a-9212-e04b82a9d723\") " pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:09.296421 kubelet[2915]: I0706 23:58:09.296290 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edda1339-9324-4b2e-a45e-3e301a3aa698-config-volume\") pod \"coredns-7c65d6cfc9-b7l64\" (UID: \"edda1339-9324-4b2e-a45e-3e301a3aa698\") " pod="kube-system/coredns-7c65d6cfc9-b7l64"
Jul 6 23:58:09.296922 kubelet[2915]: I0706 23:58:09.296300 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85l8j\" (UniqueName: \"kubernetes.io/projected/f6ec6603-698d-463e-944a-b5d9f581d0b3-kube-api-access-85l8j\") pod \"calico-kube-controllers-5bb8955bc9-hwzqb\" (UID: \"f6ec6603-698d-463e-944a-b5d9f581d0b3\") " pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb"
Jul 6 23:58:09.296922 kubelet[2915]: I0706 23:58:09.296312 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ec6603-698d-463e-944a-b5d9f581d0b3-tigera-ca-bundle\") pod \"calico-kube-controllers-5bb8955bc9-hwzqb\" (UID: \"f6ec6603-698d-463e-944a-b5d9f581d0b3\") " pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb"
Jul 6 23:58:09.296922 kubelet[2915]: I0706 23:58:09.296322 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85mh\" (UniqueName: \"kubernetes.io/projected/78734851-1240-4e35-b671-f1113509a863-kube-api-access-z85mh\") pod \"calico-apiserver-66d984f854-dcpkk\" (UID: \"78734851-1240-4e35-b671-f1113509a863\") " pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk"
Jul 6 23:58:09.296922 kubelet[2915]: I0706 23:58:09.296333 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aab2d05-ec88-4437-948f-f21fe9b0d771-calico-apiserver-certs\") pod \"calico-apiserver-cbc88db65-jzzkz\" (UID: \"1aab2d05-ec88-4437-948f-f21fe9b0d771\") " pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz"
Jul 6 23:58:09.296922 kubelet[2915]: I0706 23:58:09.296343 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa6083f4-7360-4fbe-9116-052981667c66-calico-apiserver-certs\") pod \"calico-apiserver-66d984f854-sqcdr\" (UID: \"fa6083f4-7360-4fbe-9116-052981667c66\") " pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr"
Jul 6 23:58:09.297062 kubelet[2915]: I0706 23:58:09.296354 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ee4ce16-275b-4d42-b3b2-5651a9edac72-config-volume\") pod \"coredns-7c65d6cfc9-rwltj\" (UID: \"6ee4ce16-275b-4d42-b3b2-5651a9edac72\") " pod="kube-system/coredns-7c65d6cfc9-rwltj"
Jul 6 23:58:09.297062 kubelet[2915]: I0706 23:58:09.296362 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-backend-key-pair\") pod \"whisker-ccc6d79dd-5wwps\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " pod="calico-system/whisker-ccc6d79dd-5wwps"
Jul 6 23:58:09.543761 containerd[1646]: time="2025-07-06T23:58:09.543274969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ccc6d79dd-5wwps,Uid:8f36e826-d10d-4e96-a49b-5cddfb9434f6,Namespace:calico-system,Attempt:0,}"
Jul 6 23:58:09.546196 containerd[1646]: time="2025-07-06T23:58:09.546162686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwltj,Uid:6ee4ce16-275b-4d42-b3b2-5651a9edac72,Namespace:kube-system,Attempt:0,}"
Jul 6 23:58:09.546967 containerd[1646]: time="2025-07-06T23:58:09.546481343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-dcpkk,Uid:78734851-1240-4e35-b671-f1113509a863,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:58:09.555748 containerd[1646]: time="2025-07-06T23:58:09.555720057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7l64,Uid:edda1339-9324-4b2e-a45e-3e301a3aa698,Namespace:kube-system,Attempt:0,}"
Jul 6 23:58:09.555875 containerd[1646]: time="2025-07-06T23:58:09.555859453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-spr8t,Uid:94185960-601a-483a-9212-e04b82a9d723,Namespace:calico-system,Attempt:0,}"
Jul 6 23:58:09.558032 containerd[1646]: time="2025-07-06T23:58:09.557728008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8955bc9-hwzqb,Uid:f6ec6603-698d-463e-944a-b5d9f581d0b3,Namespace:calico-system,Attempt:0,}"
Jul 6 23:58:09.558032 containerd[1646]: time="2025-07-06T23:58:09.557847970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-jzzkz,Uid:1aab2d05-ec88-4437-948f-f21fe9b0d771,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:58:09.560576 containerd[1646]: time="2025-07-06T23:58:09.560454666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-sqcdr,Uid:fa6083f4-7360-4fbe-9116-052981667c66,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:58:09.985693 containerd[1646]: time="2025-07-06T23:58:09.985658458Z" level=error msg="Failed to destroy network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:09.986965 containerd[1646]: time="2025-07-06T23:58:09.986305819Z" level=error msg="Failed to destroy network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:09.988080 containerd[1646]: time="2025-07-06T23:58:09.988060312Z" level=error msg="encountered an error cleaning up failed sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:09.988523 containerd[1646]: time="2025-07-06T23:58:09.988503436Z" level=error msg="encountered an error cleaning up failed sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.003789 containerd[1646]: time="2025-07-06T23:58:10.003264813Z" level=error msg="Failed to destroy network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.004105 containerd[1646]: time="2025-07-06T23:58:10.004086835Z" level=error msg="encountered an error cleaning up failed sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.004163 containerd[1646]: time="2025-07-06T23:58:10.004124711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-jzzkz,Uid:1aab2d05-ec88-4437-948f-f21fe9b0d771,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.004841 kubelet[2915]: E0706 23:58:10.004818 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.005525 kubelet[2915]: E0706 23:58:10.004866 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz"
Jul 6 23:58:10.005525 kubelet[2915]: E0706 23:58:10.004883 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz"
Jul 6 23:58:10.005525 kubelet[2915]: E0706 23:58:10.004913 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-cbc88db65-jzzkz_calico-apiserver(1aab2d05-ec88-4437-948f-f21fe9b0d771)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-cbc88db65-jzzkz_calico-apiserver(1aab2d05-ec88-4437-948f-f21fe9b0d771)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz" podUID="1aab2d05-ec88-4437-948f-f21fe9b0d771"
Jul 6 23:58:10.005657 containerd[1646]: time="2025-07-06T23:58:10.005539805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-dcpkk,Uid:78734851-1240-4e35-b671-f1113509a863,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.008813 containerd[1646]: time="2025-07-06T23:58:10.008365415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-sqcdr,Uid:fa6083f4-7360-4fbe-9116-052981667c66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.008813 containerd[1646]: time="2025-07-06T23:58:10.008742599Z" level=error msg="Failed to destroy network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009449 containerd[1646]: time="2025-07-06T23:58:10.009159739Z" level=error msg="Failed to destroy network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009786 kubelet[2915]: E0706 23:58:10.008593 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009786 kubelet[2915]: E0706 23:58:10.009233 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr"
Jul 6 23:58:10.009786 kubelet[2915]: E0706 23:58:10.009247 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009535864Z" level=error msg="encountered an error cleaning up failed sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009557895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ccc6d79dd-5wwps,Uid:8f36e826-d10d-4e96-a49b-5cddfb9434f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009637038Z" level=error msg="Failed to destroy network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009781765Z" level=error msg="encountered an error cleaning up failed sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009800029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-spr8t,Uid:94185960-601a-483a-9212-e04b82a9d723,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009866 containerd[1646]: time="2025-07-06T23:58:10.009843031Z" level=error msg="Failed to destroy network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009991 kubelet[2915]: E0706 23:58:10.009275 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d984f854-sqcdr_calico-apiserver(fa6083f4-7360-4fbe-9116-052981667c66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d984f854-sqcdr_calico-apiserver(fa6083f4-7360-4fbe-9116-052981667c66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr" podUID="fa6083f4-7360-4fbe-9116-052981667c66"
Jul 6 23:58:10.009991 kubelet[2915]: E0706 23:58:10.009304 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.009991 kubelet[2915]: E0706 23:58:10.009316 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk"
Jul 6 23:58:10.010063 kubelet[2915]: E0706 23:58:10.009324 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk"
Jul 6 23:58:10.010063 kubelet[2915]: E0706 23:58:10.009338 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d984f854-dcpkk_calico-apiserver(78734851-1240-4e35-b671-f1113509a863)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d984f854-dcpkk_calico-apiserver(78734851-1240-4e35-b671-f1113509a863)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk" podUID="78734851-1240-4e35-b671-f1113509a863"
Jul 6 23:58:10.010394 kubelet[2915]: E0706 23:58:10.010375 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.010425 kubelet[2915]: E0706 23:58:10.010397 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:10.010425 kubelet[2915]: E0706 23:58:10.010408 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-spr8t"
Jul 6 23:58:10.010465 kubelet[2915]: E0706 23:58:10.010425 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-spr8t_calico-system(94185960-601a-483a-9212-e04b82a9d723)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-spr8t_calico-system(94185960-601a-483a-9212-e04b82a9d723)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-spr8t" podUID="94185960-601a-483a-9212-e04b82a9d723"
Jul 6 23:58:10.010563 kubelet[2915]: E0706 23:58:10.010548 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.010590 kubelet[2915]: E0706 23:58:10.010564 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ccc6d79dd-5wwps"
Jul 6 23:58:10.010590 kubelet[2915]: E0706 23:58:10.010573 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ccc6d79dd-5wwps"
Jul 6 23:58:10.010723 kubelet[2915]: E0706 23:58:10.010588 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-ccc6d79dd-5wwps_calico-system(8f36e826-d10d-4e96-a49b-5cddfb9434f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-ccc6d79dd-5wwps_calico-system(8f36e826-d10d-4e96-a49b-5cddfb9434f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ccc6d79dd-5wwps" podUID="8f36e826-d10d-4e96-a49b-5cddfb9434f6"
Jul 6 23:58:10.010762 containerd[1646]: time="2025-07-06T23:58:10.010740962Z" level=error msg="Failed to destroy network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.010912 containerd[1646]: time="2025-07-06T23:58:10.010869189Z" level=error msg="encountered an error cleaning up failed sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.010912 containerd[1646]: time="2025-07-06T23:58:10.010891286Z" level=error msg="encountered an error cleaning up failed sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011102 containerd[1646]: time="2025-07-06T23:58:10.010892686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7l64,Uid:edda1339-9324-4b2e-a45e-3e301a3aa698,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011102 containerd[1646]: time="2025-07-06T23:58:10.010975510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8955bc9-hwzqb,Uid:f6ec6603-698d-463e-944a-b5d9f581d0b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011176 kubelet[2915]: E0706 23:58:10.011052 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011176 kubelet[2915]: E0706 23:58:10.011067 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb"
Jul 6 23:58:10.011176 kubelet[2915]: E0706 23:58:10.011076 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb"
Jul 6 23:58:10.011247 kubelet[2915]: E0706 23:58:10.011091 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bb8955bc9-hwzqb_calico-system(f6ec6603-698d-463e-944a-b5d9f581d0b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bb8955bc9-hwzqb_calico-system(f6ec6603-698d-463e-944a-b5d9f581d0b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb" podUID="f6ec6603-698d-463e-944a-b5d9f581d0b3"
Jul 6 23:58:10.011247 kubelet[2915]: E0706 23:58:10.011111 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011247 kubelet[2915]: E0706 23:58:10.011122 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b7l64"
Jul 6 23:58:10.011355 containerd[1646]: time="2025-07-06T23:58:10.011230017Z" level=error msg="encountered an error cleaning up failed sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011355 containerd[1646]: time="2025-07-06T23:58:10.011249087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwltj,Uid:6ee4ce16-275b-4d42-b3b2-5651a9edac72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011395 kubelet[2915]: E0706 23:58:10.011129 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-b7l64"
Jul 6 23:58:10.011395 kubelet[2915]: E0706 23:58:10.011143 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-b7l64_kube-system(edda1339-9324-4b2e-a45e-3e301a3aa698)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-b7l64_kube-system(edda1339-9324-4b2e-a45e-3e301a3aa698)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b7l64" podUID="edda1339-9324-4b2e-a45e-3e301a3aa698"
Jul 6 23:58:10.011395 kubelet[2915]: E0706 23:58:10.011303 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.011483 kubelet[2915]: E0706 23:58:10.011317 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rwltj"
Jul 6 23:58:10.011483 kubelet[2915]: E0706 23:58:10.011339 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rwltj"
Jul 6 23:58:10.011483 kubelet[2915]: E0706 23:58:10.011357 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rwltj_kube-system(6ee4ce16-275b-4d42-b3b2-5651a9edac72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rwltj_kube-system(6ee4ce16-275b-4d42-b3b2-5651a9edac72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rwltj" podUID="6ee4ce16-275b-4d42-b3b2-5651a9edac72"
Jul 6 23:58:10.026883 containerd[1646]: time="2025-07-06T23:58:10.026855315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 6 23:58:10.787666 containerd[1646]: time="2025-07-06T23:58:10.787637207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvnfh,Uid:451e31a5-b9c2-46fb-96e4-f1e20de500e9,Namespace:calico-system,Attempt:0,}"
Jul 6 23:58:10.833229 containerd[1646]: time="2025-07-06T23:58:10.832967957Z" level=error msg="Failed to destroy network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.833229 containerd[1646]: time="2025-07-06T23:58:10.833173916Z" level=error msg="encountered an error cleaning up failed sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.833229 containerd[1646]: time="2025-07-06T23:58:10.833199887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvnfh,Uid:451e31a5-b9c2-46fb-96e4-f1e20de500e9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:58:10.835300 kubelet[2915]: E0706 23:58:10.833469 2915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:10.835300 kubelet[2915]: E0706 23:58:10.833507 2915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:58:10.835300 kubelet[2915]: E0706 23:58:10.833520 2915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvnfh" Jul 6 23:58:10.835403 kubelet[2915]: E0706 23:58:10.833548 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvnfh_calico-system(451e31a5-b9c2-46fb-96e4-f1e20de500e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvnfh_calico-system(451e31a5-b9c2-46fb-96e4-f1e20de500e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvnfh" 
podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:10.835645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad-shm.mount: Deactivated successfully. Jul 6 23:58:11.013957 kubelet[2915]: I0706 23:58:11.013937 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:11.015578 kubelet[2915]: I0706 23:58:11.015374 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:11.035487 kubelet[2915]: I0706 23:58:11.035293 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:11.037576 kubelet[2915]: I0706 23:58:11.037212 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:11.038203 kubelet[2915]: I0706 23:58:11.038133 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:11.039932 kubelet[2915]: I0706 23:58:11.039914 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:11.041546 kubelet[2915]: I0706 23:58:11.041275 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:11.045087 kubelet[2915]: I0706 23:58:11.044761 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:11.046547 kubelet[2915]: I0706 
23:58:11.046532 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:11.067156 containerd[1646]: time="2025-07-06T23:58:11.067015378Z" level=info msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" Jul 6 23:58:11.067773 containerd[1646]: time="2025-07-06T23:58:11.067518499Z" level=info msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" Jul 6 23:58:11.070110 containerd[1646]: time="2025-07-06T23:58:11.069964398Z" level=info msg="Ensure that sandbox dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2 in task-service has been cleanup successfully" Jul 6 23:58:11.070110 containerd[1646]: time="2025-07-06T23:58:11.070013551Z" level=info msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" Jul 6 23:58:11.070174 containerd[1646]: time="2025-07-06T23:58:11.070114499Z" level=info msg="Ensure that sandbox 164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a in task-service has been cleanup successfully" Jul 6 23:58:11.079328 containerd[1646]: time="2025-07-06T23:58:11.079286182Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:11.079445 containerd[1646]: time="2025-07-06T23:58:11.079434029Z" level=info msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" Jul 6 23:58:11.079567 containerd[1646]: time="2025-07-06T23:58:11.079512064Z" level=info msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" Jul 6 23:58:11.079628 containerd[1646]: time="2025-07-06T23:58:11.079617545Z" level=info msg="Ensure that sandbox f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad in task-service has been cleanup successfully" Jul 6 23:58:11.079818 containerd[1646]: 
time="2025-07-06T23:58:11.079309910Z" level=info msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" Jul 6 23:58:11.079868 containerd[1646]: time="2025-07-06T23:58:11.079848682Z" level=info msg="Ensure that sandbox 772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033 in task-service has been cleanup successfully" Jul 6 23:58:11.080714 containerd[1646]: time="2025-07-06T23:58:11.079628873Z" level=info msg="Ensure that sandbox dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28 in task-service has been cleanup successfully" Jul 6 23:58:11.081882 containerd[1646]: time="2025-07-06T23:58:11.079456104Z" level=info msg="Ensure that sandbox 305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070 in task-service has been cleanup successfully" Jul 6 23:58:11.082077 containerd[1646]: time="2025-07-06T23:58:11.069964336Z" level=info msg="Ensure that sandbox d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2 in task-service has been cleanup successfully" Jul 6 23:58:11.083742 containerd[1646]: time="2025-07-06T23:58:11.079657400Z" level=info msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" Jul 6 23:58:11.083880 containerd[1646]: time="2025-07-06T23:58:11.083863527Z" level=info msg="Ensure that sandbox 42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64 in task-service has been cleanup successfully" Jul 6 23:58:11.086285 containerd[1646]: time="2025-07-06T23:58:11.079488413Z" level=info msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" Jul 6 23:58:11.089881 containerd[1646]: time="2025-07-06T23:58:11.086556521Z" level=info msg="Ensure that sandbox e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf in task-service has been cleanup successfully" Jul 6 23:58:11.166000 containerd[1646]: time="2025-07-06T23:58:11.165694065Z" level=error msg="StopPodSandbox for 
\"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" failed" error="failed to destroy network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.166185 kubelet[2915]: E0706 23:58:11.165847 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:11.166320 containerd[1646]: time="2025-07-06T23:58:11.166306128Z" level=error msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" failed" error="failed to destroy network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.167464 kubelet[2915]: E0706 23:58:11.166479 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:11.169211 containerd[1646]: time="2025-07-06T23:58:11.168639173Z" level=error 
msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" failed" error="failed to destroy network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.169437 containerd[1646]: time="2025-07-06T23:58:11.169422589Z" level=error msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" failed" error="failed to destroy network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.171160 containerd[1646]: time="2025-07-06T23:58:11.171145734Z" level=error msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" failed" error="failed to destroy network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.171885 kubelet[2915]: E0706 23:58:11.165884 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070"} Jul 6 23:58:11.171885 kubelet[2915]: E0706 23:58:11.171802 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78734851-1240-4e35-b671-f1113509a863\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.171885 kubelet[2915]: E0706 23:58:11.171801 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:11.171885 kubelet[2915]: E0706 23:58:11.171824 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2"} Jul 6 23:58:11.171885 kubelet[2915]: E0706 23:58:11.171818 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78734851-1240-4e35-b671-f1113509a863\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk" podUID="78734851-1240-4e35-b671-f1113509a863" Jul 6 23:58:11.172834 kubelet[2915]: E0706 23:58:11.166502 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf"} Jul 6 23:58:11.172834 kubelet[2915]: E0706 23:58:11.171847 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1aab2d05-ec88-4437-948f-f21fe9b0d771\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.172834 kubelet[2915]: E0706 23:58:11.171856 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6ec6603-698d-463e-944a-b5d9f581d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.172834 kubelet[2915]: E0706 23:58:11.171858 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1aab2d05-ec88-4437-948f-f21fe9b0d771\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz" podUID="1aab2d05-ec88-4437-948f-f21fe9b0d771" Jul 6 23:58:11.172952 kubelet[2915]: E0706 23:58:11.171865 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6ec6603-698d-463e-944a-b5d9f581d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb" podUID="f6ec6603-698d-463e-944a-b5d9f581d0b3" Jul 6 23:58:11.172952 kubelet[2915]: E0706 23:58:11.171875 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:11.172952 kubelet[2915]: E0706 23:58:11.171885 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad"} Jul 6 23:58:11.172952 kubelet[2915]: E0706 23:58:11.171896 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.173051 kubelet[2915]: E0706 23:58:11.171905 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"451e31a5-b9c2-46fb-96e4-f1e20de500e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvnfh" podUID="451e31a5-b9c2-46fb-96e4-f1e20de500e9" Jul 6 23:58:11.173361 kubelet[2915]: E0706 23:58:11.173292 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:11.173361 kubelet[2915]: E0706 23:58:11.173314 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2"} Jul 6 23:58:11.173361 kubelet[2915]: E0706 23:58:11.173333 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa6083f4-7360-4fbe-9116-052981667c66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.173361 kubelet[2915]: E0706 23:58:11.173345 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa6083f4-7360-4fbe-9116-052981667c66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr" podUID="fa6083f4-7360-4fbe-9116-052981667c66" Jul 6 23:58:11.173481 containerd[1646]: time="2025-07-06T23:58:11.172261011Z" level=error msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" failed" error="failed to destroy network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.173481 containerd[1646]: time="2025-07-06T23:58:11.173399328Z" level=error msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" failed" error="failed to destroy network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.173481 containerd[1646]: time="2025-07-06T23:58:11.173442024Z" level=error msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" failed" error="failed to destroy network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.173963 containerd[1646]: time="2025-07-06T23:58:11.173494265Z" level=error msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" failed" error="failed to destroy network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:58:11.173986 kubelet[2915]: E0706 23:58:11.173721 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:11.173986 kubelet[2915]: E0706 23:58:11.173736 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033"} Jul 6 23:58:11.173986 kubelet[2915]: E0706 23:58:11.173748 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94185960-601a-483a-9212-e04b82a9d723\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.173986 kubelet[2915]: E0706 23:58:11.173746 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:11.173986 
kubelet[2915]: E0706 23:58:11.173770 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64"} Jul 6 23:58:11.174084 kubelet[2915]: E0706 23:58:11.173770 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94185960-601a-483a-9212-e04b82a9d723\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-spr8t" podUID="94185960-601a-483a-9212-e04b82a9d723" Jul 6 23:58:11.174084 kubelet[2915]: E0706 23:58:11.173783 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.174084 kubelet[2915]: E0706 23:58:11.173789 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:11.174084 kubelet[2915]: E0706 23:58:11.173800 2915 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28"} Jul 6 23:58:11.174175 kubelet[2915]: E0706 23:58:11.173796 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ccc6d79dd-5wwps" podUID="8f36e826-d10d-4e96-a49b-5cddfb9434f6" Jul 6 23:58:11.174175 kubelet[2915]: E0706 23:58:11.173823 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"edda1339-9324-4b2e-a45e-3e301a3aa698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.174175 kubelet[2915]: E0706 23:58:11.173835 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"edda1339-9324-4b2e-a45e-3e301a3aa698\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-b7l64" 
podUID="edda1339-9324-4b2e-a45e-3e301a3aa698" Jul 6 23:58:11.174266 kubelet[2915]: E0706 23:58:11.173822 2915 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:11.174266 kubelet[2915]: E0706 23:58:11.173848 2915 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a"} Jul 6 23:58:11.174266 kubelet[2915]: E0706 23:58:11.173869 2915 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ee4ce16-275b-4d42-b3b2-5651a9edac72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:58:11.174266 kubelet[2915]: E0706 23:58:11.173880 2915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ee4ce16-275b-4d42-b3b2-5651a9edac72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rwltj" 
podUID="6ee4ce16-275b-4d42-b3b2-5651a9edac72" Jul 6 23:58:14.184803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834484099.mount: Deactivated successfully. Jul 6 23:58:14.229619 containerd[1646]: time="2025-07-06T23:58:14.229067725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:58:14.238394 containerd[1646]: time="2025-07-06T23:58:14.238306152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.208763767s" Jul 6 23:58:14.238394 containerd[1646]: time="2025-07-06T23:58:14.238333137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:58:14.241433 containerd[1646]: time="2025-07-06T23:58:14.241263181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:14.266771 containerd[1646]: time="2025-07-06T23:58:14.266746855Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:14.267105 containerd[1646]: time="2025-07-06T23:58:14.267084245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:14.287414 containerd[1646]: time="2025-07-06T23:58:14.287337016Z" level=info msg="CreateContainer within sandbox \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:58:14.318203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963101991.mount: Deactivated successfully. Jul 6 23:58:14.325574 containerd[1646]: time="2025-07-06T23:58:14.325542035Z" level=info msg="CreateContainer within sandbox \"9c1a0188914d6f1b3127dddba09017f3ba619223aab72f6b555b72de381a9600\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a96cd798000deebb465b3195f8ecf3f1a40f02c11b3b80f1678b7162464ed270\"" Jul 6 23:58:14.329573 containerd[1646]: time="2025-07-06T23:58:14.329484802Z" level=info msg="StartContainer for \"a96cd798000deebb465b3195f8ecf3f1a40f02c11b3b80f1678b7162464ed270\"" Jul 6 23:58:14.478625 containerd[1646]: time="2025-07-06T23:58:14.478536629Z" level=info msg="StartContainer for \"a96cd798000deebb465b3195f8ecf3f1a40f02c11b3b80f1678b7162464ed270\" returns successfully" Jul 6 23:58:14.971985 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:58:14.973176 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jul 6 23:58:15.167619 kubelet[2915]: I0706 23:58:15.141857 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4l5b7" podStartSLOduration=2.369729413 podStartE2EDuration="17.122970483s" podCreationTimestamp="2025-07-06 23:57:58 +0000 UTC" firstStartedPulling="2025-07-06 23:57:59.485573626 +0000 UTC m=+19.890758039" lastFinishedPulling="2025-07-06 23:58:14.238814695 +0000 UTC m=+34.643999109" observedRunningTime="2025-07-06 23:58:15.122840935 +0000 UTC m=+35.528025358" watchObservedRunningTime="2025-07-06 23:58:15.122970483 +0000 UTC m=+35.528154900" Jul 6 23:58:15.363704 containerd[1646]: time="2025-07-06T23:58:15.363610460Z" level=info msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.460 [INFO][4082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.462 [INFO][4082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" iface="eth0" netns="/var/run/netns/cni-afb33e17-ff3b-3b64-efa5-d1bbb7e1bc9d" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.462 [INFO][4082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" iface="eth0" netns="/var/run/netns/cni-afb33e17-ff3b-3b64-efa5-d1bbb7e1bc9d" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.463 [INFO][4082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" iface="eth0" netns="/var/run/netns/cni-afb33e17-ff3b-3b64-efa5-d1bbb7e1bc9d" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.463 [INFO][4082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.463 [INFO][4082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.683 [INFO][4100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.688 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.688 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.701 [WARNING][4100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.701 [INFO][4100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.702 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:15.704793 containerd[1646]: 2025-07-06 23:58:15.703 [INFO][4082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:15.708185 systemd[1]: run-netns-cni\x2dafb33e17\x2dff3b\x2d3b64\x2defa5\x2dd1bbb7e1bc9d.mount: Deactivated successfully. 
Jul 6 23:58:15.710501 containerd[1646]: time="2025-07-06T23:58:15.710330379Z" level=info msg="TearDown network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" successfully" Jul 6 23:58:15.710501 containerd[1646]: time="2025-07-06T23:58:15.710356095Z" level=info msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" returns successfully" Jul 6 23:58:15.848696 kubelet[2915]: I0706 23:58:15.848659 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-backend-key-pair\") pod \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " Jul 6 23:58:15.851882 kubelet[2915]: I0706 23:58:15.851841 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-ca-bundle\") pod \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " Jul 6 23:58:15.851882 kubelet[2915]: I0706 23:58:15.851883 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hrxp\" (UniqueName: \"kubernetes.io/projected/8f36e826-d10d-4e96-a49b-5cddfb9434f6-kube-api-access-6hrxp\") pod \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\" (UID: \"8f36e826-d10d-4e96-a49b-5cddfb9434f6\") " Jul 6 23:58:15.860022 systemd[1]: var-lib-kubelet-pods-8f36e826\x2dd10d\x2d4e96\x2da49b\x2d5cddfb9434f6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 6 23:58:15.865280 kubelet[2915]: I0706 23:58:15.863113 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8f36e826-d10d-4e96-a49b-5cddfb9434f6" (UID: "8f36e826-d10d-4e96-a49b-5cddfb9434f6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:58:15.868308 systemd[1]: var-lib-kubelet-pods-8f36e826\x2dd10d\x2d4e96\x2da49b\x2d5cddfb9434f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hrxp.mount: Deactivated successfully. Jul 6 23:58:15.869453 kubelet[2915]: I0706 23:58:15.869435 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f36e826-d10d-4e96-a49b-5cddfb9434f6-kube-api-access-6hrxp" (OuterVolumeSpecName: "kube-api-access-6hrxp") pod "8f36e826-d10d-4e96-a49b-5cddfb9434f6" (UID: "8f36e826-d10d-4e96-a49b-5cddfb9434f6"). InnerVolumeSpecName "kube-api-access-6hrxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:58:15.873668 kubelet[2915]: I0706 23:58:15.873640 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8f36e826-d10d-4e96-a49b-5cddfb9434f6" (UID: "8f36e826-d10d-4e96-a49b-5cddfb9434f6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:58:15.953008 kubelet[2915]: I0706 23:58:15.952974 2915 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:58:15.953008 kubelet[2915]: I0706 23:58:15.953001 2915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hrxp\" (UniqueName: \"kubernetes.io/projected/8f36e826-d10d-4e96-a49b-5cddfb9434f6-kube-api-access-6hrxp\") on node \"localhost\" DevicePath \"\"" Jul 6 23:58:15.953008 kubelet[2915]: I0706 23:58:15.953010 2915 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8f36e826-d10d-4e96-a49b-5cddfb9434f6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:58:16.254377 kubelet[2915]: I0706 23:58:16.254255 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1429ffa-d939-4b95-95f8-efc3dab8536b-whisker-ca-bundle\") pod \"whisker-5c9db55dc6-fz46s\" (UID: \"a1429ffa-d939-4b95-95f8-efc3dab8536b\") " pod="calico-system/whisker-5c9db55dc6-fz46s" Jul 6 23:58:16.254377 kubelet[2915]: I0706 23:58:16.254303 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28j8\" (UniqueName: \"kubernetes.io/projected/a1429ffa-d939-4b95-95f8-efc3dab8536b-kube-api-access-s28j8\") pod \"whisker-5c9db55dc6-fz46s\" (UID: \"a1429ffa-d939-4b95-95f8-efc3dab8536b\") " pod="calico-system/whisker-5c9db55dc6-fz46s" Jul 6 23:58:16.254377 kubelet[2915]: I0706 23:58:16.254326 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a1429ffa-d939-4b95-95f8-efc3dab8536b-whisker-backend-key-pair\") pod \"whisker-5c9db55dc6-fz46s\" (UID: \"a1429ffa-d939-4b95-95f8-efc3dab8536b\") " pod="calico-system/whisker-5c9db55dc6-fz46s" Jul 6 23:58:16.434050 containerd[1646]: time="2025-07-06T23:58:16.433976715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9db55dc6-fz46s,Uid:a1429ffa-d939-4b95-95f8-efc3dab8536b,Namespace:calico-system,Attempt:0,}" Jul 6 23:58:16.573328 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:16.560037 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:16.560077 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:16.727560 systemd-networkd[1286]: cali1f8fbbce1ed: Link UP Jul 6 23:58:16.729958 systemd-networkd[1286]: cali1f8fbbce1ed: Gained carrier Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.496 [INFO][4123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.521 [INFO][4123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c9db55dc6--fz46s-eth0 whisker-5c9db55dc6- calico-system a1429ffa-d939-4b95-95f8-efc3dab8536b 891 0 2025-07-06 23:58:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c9db55dc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c9db55dc6-fz46s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1f8fbbce1ed [] [] }} ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.521 [INFO][4123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.563 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" HandleID="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Workload="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.572 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" HandleID="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Workload="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c9db55dc6-fz46s", "timestamp":"2025-07-06 23:58:16.563076522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.572 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.572 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.572 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.590 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.621 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.626 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.628 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.629 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.630 [INFO][4198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.631 [INFO][4198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6 Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.635 [INFO][4198] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.645 [INFO][4198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.645 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" host="localhost" Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.645 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:16.750959 containerd[1646]: 2025-07-06 23:58:16.645 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" HandleID="k8s-pod-network.4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Workload="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.648 [INFO][4123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c9db55dc6--fz46s-eth0", GenerateName:"whisker-5c9db55dc6-", Namespace:"calico-system", SelfLink:"", UID:"a1429ffa-d939-4b95-95f8-efc3dab8536b", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9db55dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c9db55dc6-fz46s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f8fbbce1ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.648 [INFO][4123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.649 [INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8fbbce1ed ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.712 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.714 [INFO][4123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" 
WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c9db55dc6--fz46s-eth0", GenerateName:"whisker-5c9db55dc6-", Namespace:"calico-system", SelfLink:"", UID:"a1429ffa-d939-4b95-95f8-efc3dab8536b", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9db55dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6", Pod:"whisker-5c9db55dc6-fz46s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f8fbbce1ed", MAC:"f6:6a:3c:ba:56:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:16.758815 containerd[1646]: 2025-07-06 23:58:16.733 [INFO][4123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6" Namespace="calico-system" Pod="whisker-5c9db55dc6-fz46s" WorkloadEndpoint="localhost-k8s-whisker--5c9db55dc6--fz46s-eth0" Jul 6 23:58:16.776189 containerd[1646]: time="2025-07-06T23:58:16.775978138Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:16.776189 containerd[1646]: time="2025-07-06T23:58:16.776025800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:16.776189 containerd[1646]: time="2025-07-06T23:58:16.776037803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:16.777373 containerd[1646]: time="2025-07-06T23:58:16.777061429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:16.809534 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:16.851111 containerd[1646]: time="2025-07-06T23:58:16.850986831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9db55dc6-fz46s,Uid:a1429ffa-d939-4b95-95f8-efc3dab8536b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6\"" Jul 6 23:58:16.858622 containerd[1646]: time="2025-07-06T23:58:16.858502816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:58:17.368966 systemd[1]: run-containerd-runc-k8s.io-4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6-runc.wLmLT8.mount: Deactivated successfully. 
Jul 6 23:58:17.829818 kubelet[2915]: I0706 23:58:17.829775 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f36e826-d10d-4e96-a49b-5cddfb9434f6" path="/var/lib/kubelet/pods/8f36e826-d10d-4e96-a49b-5cddfb9434f6/volumes" Jul 6 23:58:17.849323 systemd-networkd[1286]: cali1f8fbbce1ed: Gained IPv6LL Jul 6 23:58:18.370225 containerd[1646]: time="2025-07-06T23:58:18.369750329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:18.370225 containerd[1646]: time="2025-07-06T23:58:18.370201888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:58:18.370658 containerd[1646]: time="2025-07-06T23:58:18.370645628Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:18.371839 containerd[1646]: time="2025-07-06T23:58:18.371821204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:18.372383 containerd[1646]: time="2025-07-06T23:58:18.372362618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.51383626s" Jul 6 23:58:18.375048 containerd[1646]: time="2025-07-06T23:58:18.372385424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:58:18.375048 
containerd[1646]: time="2025-07-06T23:58:18.374132221Z" level=info msg="CreateContainer within sandbox \"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:58:18.412679 containerd[1646]: time="2025-07-06T23:58:18.412634306Z" level=info msg="CreateContainer within sandbox \"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"62ed7008d4a219913932ed82ed2d59380a1a8988367bee64b29edcb737e461c8\"" Jul 6 23:58:18.413967 containerd[1646]: time="2025-07-06T23:58:18.413472504Z" level=info msg="StartContainer for \"62ed7008d4a219913932ed82ed2d59380a1a8988367bee64b29edcb737e461c8\"" Jul 6 23:58:18.476149 containerd[1646]: time="2025-07-06T23:58:18.476129919Z" level=info msg="StartContainer for \"62ed7008d4a219913932ed82ed2d59380a1a8988367bee64b29edcb737e461c8\" returns successfully" Jul 6 23:58:18.486802 containerd[1646]: time="2025-07-06T23:58:18.477240757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:58:20.382145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148429131.mount: Deactivated successfully. 
Jul 6 23:58:20.600144 containerd[1646]: time="2025-07-06T23:58:20.599866315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:20.601298 containerd[1646]: time="2025-07-06T23:58:20.600594382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:58:20.601298 containerd[1646]: time="2025-07-06T23:58:20.600759407Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:20.615491 containerd[1646]: time="2025-07-06T23:58:20.614971597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:20.616395 containerd[1646]: time="2025-07-06T23:58:20.616355433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.139096337s" Jul 6 23:58:20.616395 containerd[1646]: time="2025-07-06T23:58:20.616379126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:58:20.618414 containerd[1646]: time="2025-07-06T23:58:20.618387770Z" level=info msg="CreateContainer within sandbox \"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:58:20.623564 
containerd[1646]: time="2025-07-06T23:58:20.623537078Z" level=info msg="CreateContainer within sandbox \"4a73c299375c5c9c8bb0c18df9043509b3d7b6340487df19df56ab5128e5d3d6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"757d67b929ddc58cc17c768ea8861eff0e6c03e32d4caf3c82e1fbe11b62b870\"" Jul 6 23:58:20.624245 containerd[1646]: time="2025-07-06T23:58:20.623921830Z" level=info msg="StartContainer for \"757d67b929ddc58cc17c768ea8861eff0e6c03e32d4caf3c82e1fbe11b62b870\"" Jul 6 23:58:20.684271 containerd[1646]: time="2025-07-06T23:58:20.684246913Z" level=info msg="StartContainer for \"757d67b929ddc58cc17c768ea8861eff0e6c03e32d4caf3c82e1fbe11b62b870\" returns successfully" Jul 6 23:58:21.786866 containerd[1646]: time="2025-07-06T23:58:21.786789594Z" level=info msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" Jul 6 23:58:21.818835 kubelet[2915]: I0706 23:58:21.818576 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c9db55dc6-fz46s" podStartSLOduration=2.059907957 podStartE2EDuration="5.818545252s" podCreationTimestamp="2025-07-06 23:58:16 +0000 UTC" firstStartedPulling="2025-07-06 23:58:16.858265151 +0000 UTC m=+37.263449567" lastFinishedPulling="2025-07-06 23:58:20.616902446 +0000 UTC m=+41.022086862" observedRunningTime="2025-07-06 23:58:21.094324555 +0000 UTC m=+41.499508985" watchObservedRunningTime="2025-07-06 23:58:21.818545252 +0000 UTC m=+42.223729668" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.820 [INFO][4453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.820 [INFO][4453] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" iface="eth0" netns="/var/run/netns/cni-7b622cce-4161-69f8-a43e-21bbe5acb432" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.821 [INFO][4453] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" iface="eth0" netns="/var/run/netns/cni-7b622cce-4161-69f8-a43e-21bbe5acb432" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.821 [INFO][4453] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" iface="eth0" netns="/var/run/netns/cni-7b622cce-4161-69f8-a43e-21bbe5acb432" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.821 [INFO][4453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.821 [INFO][4453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.835 [INFO][4460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.835 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.835 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.839 [WARNING][4460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.839 [INFO][4460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.840 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:21.842568 containerd[1646]: 2025-07-06 23:58:21.841 [INFO][4453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:21.858140 containerd[1646]: time="2025-07-06T23:58:21.844257890Z" level=info msg="TearDown network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" successfully" Jul 6 23:58:21.858140 containerd[1646]: time="2025-07-06T23:58:21.844276782Z" level=info msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" returns successfully" Jul 6 23:58:21.858140 containerd[1646]: time="2025-07-06T23:58:21.844677108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwltj,Uid:6ee4ce16-275b-4d42-b3b2-5651a9edac72,Namespace:kube-system,Attempt:1,}" Jul 6 23:58:21.845080 systemd[1]: run-netns-cni\x2d7b622cce\x2d4161\x2d69f8\x2da43e\x2d21bbe5acb432.mount: Deactivated successfully. 
Jul 6 23:58:21.928729 systemd-networkd[1286]: calib6621f5c05e: Link UP Jul 6 23:58:21.928849 systemd-networkd[1286]: calib6621f5c05e: Gained carrier Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.883 [INFO][4467] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.889 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0 coredns-7c65d6cfc9- kube-system 6ee4ce16-275b-4d42-b3b2-5651a9edac72 921 0 2025-07-06 23:57:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-rwltj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6621f5c05e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.889 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.907 [INFO][4479] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" HandleID="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.907 [INFO][4479] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" HandleID="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-rwltj", "timestamp":"2025-07-06 23:58:21.907268083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.907 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.907 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.907 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.911 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.913 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.915 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.916 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.918 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.918 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.919 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383 Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.921 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.924 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.924 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" host="localhost" Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.924 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:58:21.940272 containerd[1646]: 2025-07-06 23:58:21.924 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" HandleID="k8s-pod-network.0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.926 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ee4ce16-275b-4d42-b3b2-5651a9edac72", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-rwltj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6621f5c05e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.926 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.926 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6621f5c05e ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.928 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.929 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ee4ce16-275b-4d42-b3b2-5651a9edac72", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383", Pod:"coredns-7c65d6cfc9-rwltj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6621f5c05e", MAC:"ba:91:9b:33:ed:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:21.941301 containerd[1646]: 2025-07-06 23:58:21.938 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rwltj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:21.951240 containerd[1646]: time="2025-07-06T23:58:21.951183963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:21.951240 containerd[1646]: time="2025-07-06T23:58:21.951216428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:21.951240 containerd[1646]: time="2025-07-06T23:58:21.951236252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:21.951833 containerd[1646]: time="2025-07-06T23:58:21.951318485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:21.969919 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:21.994876 containerd[1646]: time="2025-07-06T23:58:21.994851538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rwltj,Uid:6ee4ce16-275b-4d42-b3b2-5651a9edac72,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383\"" Jul 6 23:58:22.002130 containerd[1646]: time="2025-07-06T23:58:22.002105957Z" level=info msg="CreateContainer within sandbox \"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:58:22.022351 containerd[1646]: time="2025-07-06T23:58:22.022325311Z" level=info msg="CreateContainer within sandbox \"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"78e043acecb4b1a855c65c7aee8ba311b29985ef4d94fc3b696d668ce8d64adf\"" Jul 6 23:58:22.023043 containerd[1646]: time="2025-07-06T23:58:22.023018393Z" level=info msg="StartContainer for \"78e043acecb4b1a855c65c7aee8ba311b29985ef4d94fc3b696d668ce8d64adf\"" Jul 6 23:58:22.062431 containerd[1646]: time="2025-07-06T23:58:22.062330495Z" level=info msg="StartContainer for \"78e043acecb4b1a855c65c7aee8ba311b29985ef4d94fc3b696d668ce8d64adf\" returns successfully" Jul 6 23:58:22.106838 kubelet[2915]: I0706 23:58:22.106712 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rwltj" podStartSLOduration=35.106699861 podStartE2EDuration="35.106699861s" podCreationTimestamp="2025-07-06 23:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:22.106440931 +0000 UTC m=+42.511625354" watchObservedRunningTime="2025-07-06 23:58:22.106699861 +0000 UTC m=+42.511884282" Jul 6 23:58:22.785849 containerd[1646]: time="2025-07-06T23:58:22.785657611Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:22.785849 containerd[1646]: time="2025-07-06T23:58:22.785695282Z" level=info msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" Jul 6 23:58:22.786586 containerd[1646]: time="2025-07-06T23:58:22.785657634Z" level=info msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" Jul 6 23:58:22.787843 containerd[1646]: time="2025-07-06T23:58:22.785672696Z" level=info msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:22.926272 
containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="/var/run/netns/cni-ff53b318-8080-8822-aacd-f588884a3fb1" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="/var/run/netns/cni-ff53b318-8080-8822-aacd-f588884a3fb1" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="/var/run/netns/cni-ff53b318-8080-8822-aacd-f588884a3fb1" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.867 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.890 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.890 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.890 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.918 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.918 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.921 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:22.926272 containerd[1646]: 2025-07-06 23:58:22.924 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.926273602Z" level=info msg="TearDown network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" successfully" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.926291624Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" returns successfully" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.928987064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-dcpkk,Uid:78734851-1240-4e35-b671-f1113509a863,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" iface="eth0" netns="/var/run/netns/cni-1c7d8e75-dd96-94d8-f428-488703b2100a" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" iface="eth0" netns="/var/run/netns/cni-1c7d8e75-dd96-94d8-f428-488703b2100a" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" iface="eth0" netns="/var/run/netns/cni-1c7d8e75-dd96-94d8-f428-488703b2100a" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.882 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.900 [INFO][4663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.900 [INFO][4663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.921 [INFO][4663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.929 [WARNING][4663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.929 [INFO][4663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.930 [INFO][4663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:22.959791 containerd[1646]: 2025-07-06 23:58:22.934 [INFO][4636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.935422627Z" level=info msg="TearDown network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" successfully" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.935438849Z" level=info msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" returns successfully" Jul 6 23:58:22.959791 containerd[1646]: time="2025-07-06T23:58:22.935910673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-sqcdr,Uid:fa6083f4-7360-4fbe-9116-052981667c66,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:58:22.930180 systemd[1]: run-netns-cni\x2dff53b318\x2d8080\x2d8822\x2daacd\x2df588884a3fb1.mount: Deactivated successfully. Jul 6 23:58:22.938035 systemd[1]: run-netns-cni\x2d1c7d8e75\x2ddd96\x2d94d8\x2df428\x2d488703b2100a.mount: Deactivated successfully. 
Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.907 [INFO][4632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.908 [INFO][4632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" iface="eth0" netns="/var/run/netns/cni-2764f47d-517a-5bd9-9808-ddbc3c8af169" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.908 [INFO][4632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" iface="eth0" netns="/var/run/netns/cni-2764f47d-517a-5bd9-9808-ddbc3c8af169" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.908 [INFO][4632] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" iface="eth0" netns="/var/run/netns/cni-2764f47d-517a-5bd9-9808-ddbc3c8af169" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.908 [INFO][4632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.908 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.962 [INFO][4671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.962 [INFO][4671] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.962 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.967 [WARNING][4671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.967 [INFO][4671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.968 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.969 [INFO][4632] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:22.993825 containerd[1646]: time="2025-07-06T23:58:22.970509347Z" level=info msg="TearDown network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" successfully" Jul 6 23:58:22.993825 containerd[1646]: time="2025-07-06T23:58:22.970525327Z" level=info msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" returns successfully" Jul 6 23:58:22.993825 containerd[1646]: time="2025-07-06T23:58:22.970913829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7l64,Uid:edda1339-9324-4b2e-a45e-3e301a3aa698,Namespace:kube-system,Attempt:1,}" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" iface="eth0" netns="/var/run/netns/cni-e9cf08b3-7a96-1db0-d2d0-4295ffb6f50e" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" iface="eth0" netns="/var/run/netns/cni-e9cf08b3-7a96-1db0-d2d0-4295ffb6f50e" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" iface="eth0" netns="/var/run/netns/cni-e9cf08b3-7a96-1db0-d2d0-4295ffb6f50e" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.920 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.977 [INFO][4676] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.978 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.978 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.981 [WARNING][4676] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.981 [INFO][4676] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.982 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:22.993825 containerd[1646]: 2025-07-06 23:58:22.983 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:22.972828 systemd[1]: run-netns-cni\x2d2764f47d\x2d517a\x2d5bd9\x2d9808\x2dddbc3c8af169.mount: Deactivated successfully. Jul 6 23:58:23.005685 containerd[1646]: time="2025-07-06T23:58:22.984943181Z" level=info msg="TearDown network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" successfully" Jul 6 23:58:23.005685 containerd[1646]: time="2025-07-06T23:58:22.984958977Z" level=info msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" returns successfully" Jul 6 23:58:23.005685 containerd[1646]: time="2025-07-06T23:58:22.985336465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvnfh,Uid:451e31a5-b9c2-46fb-96e4-f1e20de500e9,Namespace:calico-system,Attempt:1,}" Jul 6 23:58:22.986690 systemd[1]: run-netns-cni\x2de9cf08b3\x2d7a96\x2d1db0\x2dd2d0\x2d4295ffb6f50e.mount: Deactivated successfully. 
Jul 6 23:58:23.384556 systemd-networkd[1286]: calic52d19b1f87: Link UP Jul 6 23:58:23.385698 systemd-networkd[1286]: calic52d19b1f87: Gained carrier Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.230 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.246 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0 calico-apiserver-66d984f854- calico-apiserver 78734851-1240-4e35-b671-f1113509a863 937 0 2025-07-06 23:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d984f854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66d984f854-dcpkk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic52d19b1f87 [] [] }} ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.246 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.343 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" 
Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.344 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66d984f854-dcpkk", "timestamp":"2025-07-06 23:58:23.343799777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.344 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.344 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.344 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.353 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.359 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.361 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.363 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.364 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.364 [INFO][4752] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.365 [INFO][4752] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0 Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.370 [INFO][4752] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4752] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" host="localhost" Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.399282 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.378 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"78734851-1240-4e35-b671-f1113509a863", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66d984f854-dcpkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52d19b1f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.378 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.378 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic52d19b1f87 ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.385 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.386 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"78734851-1240-4e35-b671-f1113509a863", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0", Pod:"calico-apiserver-66d984f854-dcpkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52d19b1f87", MAC:"8e:c5:75:de:19:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.400819 containerd[1646]: 2025-07-06 23:58:23.395 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-dcpkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:23.476533 containerd[1646]: time="2025-07-06T23:58:23.476357790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.476533 containerd[1646]: time="2025-07-06T23:58:23.476399233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.476533 containerd[1646]: time="2025-07-06T23:58:23.476410456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.476533 containerd[1646]: time="2025-07-06T23:58:23.476464101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.498950 systemd-networkd[1286]: cali2fd4be5d904: Link UP Jul 6 23:58:23.499050 systemd-networkd[1286]: cali2fd4be5d904: Gained carrier Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.253 [INFO][4712] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.288 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0 coredns-7c65d6cfc9- kube-system edda1339-9324-4b2e-a45e-3e301a3aa698 939 0 2025-07-06 23:57:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-b7l64 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fd4be5d904 [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.288 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.348 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" HandleID="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.348 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" HandleID="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-b7l64", "timestamp":"2025-07-06 23:58:23.348428946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.348 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.375 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.453 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.459 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.462 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.463 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.464 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.464 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.465 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130 Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.470 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.493 [INFO][4764] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.493 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" host="localhost" Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.494 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.509577 containerd[1646]: 2025-07-06 23:58:23.494 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" HandleID="k8s-pod-network.ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.495 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"edda1339-9324-4b2e-a45e-3e301a3aa698", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-b7l64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd4be5d904", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.495 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.495 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fd4be5d904 ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.499 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.499 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"edda1339-9324-4b2e-a45e-3e301a3aa698", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130", Pod:"coredns-7c65d6cfc9-b7l64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd4be5d904", MAC:"de:99:0f:80:d9:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.510157 containerd[1646]: 2025-07-06 23:58:23.508 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130" Namespace="kube-system" Pod="coredns-7c65d6cfc9-b7l64" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:23.537438 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:23.568549 containerd[1646]: time="2025-07-06T23:58:23.568392529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-dcpkk,Uid:78734851-1240-4e35-b671-f1113509a863,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\"" Jul 6 23:58:23.570661 containerd[1646]: time="2025-07-06T23:58:23.570481868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:58:23.576681 containerd[1646]: time="2025-07-06T23:58:23.576561391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.576681 containerd[1646]: time="2025-07-06T23:58:23.576590970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.576859 containerd[1646]: time="2025-07-06T23:58:23.576601565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.577201 containerd[1646]: time="2025-07-06T23:58:23.577160285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.597055 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:23.616448 systemd-networkd[1286]: calic3137b4a894: Link UP Jul 6 23:58:23.616782 systemd-networkd[1286]: calic3137b4a894: Gained carrier Jul 6 23:58:23.628378 containerd[1646]: time="2025-07-06T23:58:23.628357941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-b7l64,Uid:edda1339-9324-4b2e-a45e-3e301a3aa698,Namespace:kube-system,Attempt:1,} returns sandbox id \"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130\"" Jul 6 23:58:23.639222 containerd[1646]: time="2025-07-06T23:58:23.639083456Z" level=info msg="CreateContainer within sandbox \"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.235 [INFO][4702] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.250 [INFO][4702] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0 calico-apiserver-66d984f854- calico-apiserver fa6083f4-7360-4fbe-9116-052981667c66 938 0 2025-07-06 23:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d984f854 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66d984f854-sqcdr eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calic3137b4a894 [] [] }} ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.250 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.353 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.353 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66d984f854-sqcdr", "timestamp":"2025-07-06 23:58:23.353395103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.353 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.494 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.494 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.554 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.560 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.564 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.566 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.568 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.568 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.569 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2 Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.588 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.599 [INFO][4749] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.599 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" host="localhost" Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.599 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.644078 containerd[1646]: 2025-07-06 23:58:23.599 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 23:58:23.604 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6083f4-7360-4fbe-9116-052981667c66", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66d984f854-sqcdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3137b4a894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 23:58:23.605 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 23:58:23.605 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3137b4a894 ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 23:58:23.615 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 
23:58:23.615 [INFO][4702] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6083f4-7360-4fbe-9116-052981667c66", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2", Pod:"calico-apiserver-66d984f854-sqcdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3137b4a894", MAC:"16:c3:d8:f8:db:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.647396 containerd[1646]: 2025-07-06 23:58:23.642 [INFO][4702] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Namespace="calico-apiserver" Pod="calico-apiserver-66d984f854-sqcdr" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:23.657355 containerd[1646]: time="2025-07-06T23:58:23.657275407Z" level=info msg="CreateContainer within sandbox \"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b88b8f79586a11a65abe03d6df9337207ebb399168383c1fc0d4b18a9dcd9cd\"" Jul 6 23:58:23.659260 containerd[1646]: time="2025-07-06T23:58:23.657946843Z" level=info msg="StartContainer for \"3b88b8f79586a11a65abe03d6df9337207ebb399168383c1fc0d4b18a9dcd9cd\"" Jul 6 23:58:23.660906 containerd[1646]: time="2025-07-06T23:58:23.660857980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.660992 containerd[1646]: time="2025-07-06T23:58:23.660900870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.660992 containerd[1646]: time="2025-07-06T23:58:23.660909158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.660992 containerd[1646]: time="2025-07-06T23:58:23.660964371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.663822 systemd-networkd[1286]: calib6621f5c05e: Gained IPv6LL Jul 6 23:58:23.697417 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:23.702349 systemd-networkd[1286]: calif183d8f7871: Link UP Jul 6 23:58:23.706255 systemd-networkd[1286]: calif183d8f7871: Gained carrier Jul 6 23:58:23.730385 containerd[1646]: time="2025-07-06T23:58:23.730358350Z" level=info msg="StartContainer for \"3b88b8f79586a11a65abe03d6df9337207ebb399168383c1fc0d4b18a9dcd9cd\" returns successfully" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.291 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.309 [INFO][4722] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mvnfh-eth0 csi-node-driver- calico-system 451e31a5-b9c2-46fb-96e4-f1e20de500e9 940 0 2025-07-06 23:57:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mvnfh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif183d8f7871 [] [] }} ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.311 [INFO][4722] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.358 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" HandleID="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.358 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" HandleID="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mvnfh", "timestamp":"2025-07-06 23:58:23.357981061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.358 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.599 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.600 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.655 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.669 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.674 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.676 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.679 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.679 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.681 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85 Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.686 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.694 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.694 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" host="localhost" Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.694 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.732508 containerd[1646]: 2025-07-06 23:58:23.694 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" HandleID="k8s-pod-network.6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.697 [INFO][4722] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvnfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451e31a5-b9c2-46fb-96e4-f1e20de500e9", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mvnfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif183d8f7871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.698 [INFO][4722] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.698 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif183d8f7871 ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.713 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.713 [INFO][4722] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" 
Namespace="calico-system" Pod="csi-node-driver-mvnfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvnfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451e31a5-b9c2-46fb-96e4-f1e20de500e9", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85", Pod:"csi-node-driver-mvnfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif183d8f7871", MAC:"76:04:56:bd:74:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:23.749130 containerd[1646]: 2025-07-06 23:58:23.729 [INFO][4722] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85" Namespace="calico-system" Pod="csi-node-driver-mvnfh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:23.749130 containerd[1646]: time="2025-07-06T23:58:23.737214459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d984f854-sqcdr,Uid:fa6083f4-7360-4fbe-9116-052981667c66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\"" Jul 6 23:58:23.758888 containerd[1646]: time="2025-07-06T23:58:23.758831193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:23.758888 containerd[1646]: time="2025-07-06T23:58:23.758867229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:23.758994 containerd[1646]: time="2025-07-06T23:58:23.758879758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.758994 containerd[1646]: time="2025-07-06T23:58:23.758932817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:23.775217 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:23.783526 containerd[1646]: time="2025-07-06T23:58:23.783500573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvnfh,Uid:451e31a5-b9c2-46fb-96e4-f1e20de500e9,Namespace:calico-system,Attempt:1,} returns sandbox id \"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85\"" Jul 6 23:58:23.786014 containerd[1646]: time="2025-07-06T23:58:23.785695380Z" level=info msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" Jul 6 23:58:23.786014 containerd[1646]: time="2025-07-06T23:58:23.785818037Z" level=info msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" Jul 6 23:58:23.787573 containerd[1646]: time="2025-07-06T23:58:23.786526401Z" level=info msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.856 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.856 [INFO][5044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" iface="eth0" netns="/var/run/netns/cni-0e966e3a-af79-0185-5557-c95cb5d9a372" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.857 [INFO][5044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" iface="eth0" netns="/var/run/netns/cni-0e966e3a-af79-0185-5557-c95cb5d9a372" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.857 [INFO][5044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" iface="eth0" netns="/var/run/netns/cni-0e966e3a-af79-0185-5557-c95cb5d9a372" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.857 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.857 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.883 [INFO][5067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.884 [INFO][5067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.884 [INFO][5067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.890 [WARNING][5067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.890 [INFO][5067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.898 [INFO][5067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.903449 containerd[1646]: 2025-07-06 23:58:23.900 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:23.907108 containerd[1646]: time="2025-07-06T23:58:23.905896510Z" level=info msg="TearDown network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" successfully" Jul 6 23:58:23.907108 containerd[1646]: time="2025-07-06T23:58:23.905918433Z" level=info msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" returns successfully" Jul 6 23:58:23.907249 containerd[1646]: time="2025-07-06T23:58:23.907204215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-spr8t,Uid:94185960-601a-483a-9212-e04b82a9d723,Namespace:calico-system,Attempt:1,}" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.861 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.862 [INFO][5049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" iface="eth0" netns="/var/run/netns/cni-1b83143e-50d6-afd1-2ad7-7969c5c8126d" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.863 [INFO][5049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" iface="eth0" netns="/var/run/netns/cni-1b83143e-50d6-afd1-2ad7-7969c5c8126d" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.863 [INFO][5049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" iface="eth0" netns="/var/run/netns/cni-1b83143e-50d6-afd1-2ad7-7969c5c8126d" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.863 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.863 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.896 [INFO][5072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.896 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.898 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.906 [WARNING][5072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.906 [INFO][5072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.908 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.911427 containerd[1646]: 2025-07-06 23:58:23.910 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:23.912775 containerd[1646]: time="2025-07-06T23:58:23.912219906Z" level=info msg="TearDown network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" successfully" Jul 6 23:58:23.912775 containerd[1646]: time="2025-07-06T23:58:23.912675856Z" level=info msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" returns successfully" Jul 6 23:58:23.913633 containerd[1646]: time="2025-07-06T23:58:23.913334485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8955bc9-hwzqb,Uid:f6ec6603-698d-463e-944a-b5d9f581d0b3,Namespace:calico-system,Attempt:1,}" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.874 [INFO][5050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.874 [INFO][5050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" iface="eth0" netns="/var/run/netns/cni-60c05074-6a84-9cb7-30d1-8856cf28a24e" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.875 [INFO][5050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" iface="eth0" netns="/var/run/netns/cni-60c05074-6a84-9cb7-30d1-8856cf28a24e" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.875 [INFO][5050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" iface="eth0" netns="/var/run/netns/cni-60c05074-6a84-9cb7-30d1-8856cf28a24e" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.875 [INFO][5050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.875 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.911 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.912 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.912 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.917 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.917 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.918 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:23.921071 containerd[1646]: 2025-07-06 23:58:23.919 [INFO][5050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:23.922020 containerd[1646]: time="2025-07-06T23:58:23.921174024Z" level=info msg="TearDown network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" successfully" Jul 6 23:58:23.922020 containerd[1646]: time="2025-07-06T23:58:23.921195943Z" level=info msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" returns successfully" Jul 6 23:58:23.922809 containerd[1646]: time="2025-07-06T23:58:23.922685158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-jzzkz,Uid:1aab2d05-ec88-4437-948f-f21fe9b0d771,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:58:24.116244 kubelet[2915]: I0706 23:58:24.116150 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-b7l64" podStartSLOduration=37.116122023 podStartE2EDuration="37.116122023s" podCreationTimestamp="2025-07-06 23:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:24.114859335 +0000 UTC m=+44.520043757" watchObservedRunningTime="2025-07-06 23:58:24.116122023 +0000 UTC m=+44.521306439" Jul 6 23:58:24.166706 systemd-networkd[1286]: cali31fdb0253fa: Link UP Jul 6 23:58:24.169708 systemd-networkd[1286]: cali31fdb0253fa: Gained carrier Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.077 [INFO][5087] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.089 [INFO][5087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0 calico-kube-controllers-5bb8955bc9- calico-system f6ec6603-698d-463e-944a-b5d9f581d0b3 969 0 2025-07-06 23:57:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bb8955bc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bb8955bc9-hwzqb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali31fdb0253fa [] [] }} ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.089 [INFO][5087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.127 [INFO][5130] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" HandleID="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.128 [INFO][5130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" HandleID="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bb8955bc9-hwzqb", "timestamp":"2025-07-06 23:58:24.127955861 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.128 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.128 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.128 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.138 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.142 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.145 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.146 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.147 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.147 [INFO][5130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.149 [INFO][5130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.152 [INFO][5130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" host="localhost" Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:24.179371 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" HandleID="k8s-pod-network.90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 23:58:24.162 [INFO][5087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0", GenerateName:"calico-kube-controllers-5bb8955bc9-", Namespace:"calico-system", SelfLink:"", UID:"f6ec6603-698d-463e-944a-b5d9f581d0b3", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8955bc9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bb8955bc9-hwzqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31fdb0253fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 23:58:24.162 [INFO][5087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 23:58:24.162 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31fdb0253fa ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 23:58:24.167 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 
23:58:24.170 [INFO][5087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0", GenerateName:"calico-kube-controllers-5bb8955bc9-", Namespace:"calico-system", SelfLink:"", UID:"f6ec6603-698d-463e-944a-b5d9f581d0b3", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8955bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e", Pod:"calico-kube-controllers-5bb8955bc9-hwzqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31fdb0253fa", MAC:"76:85:6d:b0:b7:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.179830 containerd[1646]: 2025-07-06 
23:58:24.176 [INFO][5087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e" Namespace="calico-system" Pod="calico-kube-controllers-5bb8955bc9-hwzqb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:24.189127 systemd[1]: run-netns-cni\x2d60c05074\x2d6a84\x2d9cb7\x2d30d1\x2d8856cf28a24e.mount: Deactivated successfully. Jul 6 23:58:24.189642 systemd[1]: run-netns-cni\x2d1b83143e\x2d50d6\x2dafd1\x2d2ad7\x2d7969c5c8126d.mount: Deactivated successfully. Jul 6 23:58:24.189762 systemd[1]: run-netns-cni\x2d0e966e3a\x2daf79\x2d0185\x2d5557\x2dc95cb5d9a372.mount: Deactivated successfully. Jul 6 23:58:24.202847 containerd[1646]: time="2025-07-06T23:58:24.202673817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:24.202847 containerd[1646]: time="2025-07-06T23:58:24.202711925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:24.202847 containerd[1646]: time="2025-07-06T23:58:24.202725819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.202847 containerd[1646]: time="2025-07-06T23:58:24.202793552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.221676 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:24.250084 containerd[1646]: time="2025-07-06T23:58:24.250017169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bb8955bc9-hwzqb,Uid:f6ec6603-698d-463e-944a-b5d9f581d0b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e\"" Jul 6 23:58:24.290545 systemd-networkd[1286]: cali2f29524f208: Link UP Jul 6 23:58:24.290699 systemd-networkd[1286]: cali2f29524f208: Gained carrier Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.071 [INFO][5093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.083 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--spr8t-eth0 goldmane-58fd7646b9- calico-system 94185960-601a-483a-9212-e04b82a9d723 968 0 2025-07-06 23:57:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-spr8t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2f29524f208 [] [] }} ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.083 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.142 [INFO][5123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" HandleID="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.142 [INFO][5123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" HandleID="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-spr8t", "timestamp":"2025-07-06 23:58:24.142186206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.142 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.160 [INFO][5123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.238 [INFO][5123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.242 [INFO][5123] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.247 [INFO][5123] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.248 [INFO][5123] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.255 [INFO][5123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.255 [INFO][5123] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.256 [INFO][5123] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576 Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.270 [INFO][5123] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.283 [INFO][5123] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.283 [INFO][5123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" host="localhost" Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.283 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:24.315636 containerd[1646]: 2025-07-06 23:58:24.284 [INFO][5123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" HandleID="k8s-pod-network.f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.285 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--spr8t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"94185960-601a-483a-9212-e04b82a9d723", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-spr8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f29524f208", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.285 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.285 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f29524f208 ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.290 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.290 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--spr8t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"94185960-601a-483a-9212-e04b82a9d723", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576", Pod:"goldmane-58fd7646b9-spr8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f29524f208", MAC:"de:4d:6f:3e:4b:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.316052 containerd[1646]: 2025-07-06 23:58:24.314 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576" Namespace="calico-system" Pod="goldmane-58fd7646b9-spr8t" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:24.365700 containerd[1646]: time="2025-07-06T23:58:24.362982268Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:24.365700 containerd[1646]: time="2025-07-06T23:58:24.365502405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:24.365700 containerd[1646]: time="2025-07-06T23:58:24.365512362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.365700 containerd[1646]: time="2025-07-06T23:58:24.365575663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.401289 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:24.447009 systemd-networkd[1286]: calicc2da34053d: Link UP Jul 6 23:58:24.450863 systemd-networkd[1286]: calicc2da34053d: Gained carrier Jul 6 23:58:24.470741 containerd[1646]: time="2025-07-06T23:58:24.470711114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-spr8t,Uid:94185960-601a-483a-9212-e04b82a9d723,Namespace:calico-system,Attempt:1,} returns sandbox id \"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576\"" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.081 [INFO][5107] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.090 [INFO][5107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0 calico-apiserver-cbc88db65- calico-apiserver 1aab2d05-ec88-4437-948f-f21fe9b0d771 970 0 2025-07-06 23:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cbc88db65 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-cbc88db65-jzzkz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc2da34053d [] [] }} ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.090 [INFO][5107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.143 [INFO][5126] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" HandleID="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.143 [INFO][5126] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" HandleID="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-cbc88db65-jzzkz", "timestamp":"2025-07-06 23:58:24.143420485 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.143 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.283 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.283 [INFO][5126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.350 [INFO][5126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.355 [INFO][5126] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.358 [INFO][5126] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.360 [INFO][5126] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.369 [INFO][5126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.369 [INFO][5126] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.371 [INFO][5126] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.377 [INFO][5126] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.395 [INFO][5126] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.395 [INFO][5126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" host="localhost" Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.395 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:24.471383 containerd[1646]: 2025-07-06 23:58:24.395 [INFO][5126] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" HandleID="k8s-pod-network.5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.401 [INFO][5107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0", GenerateName:"calico-apiserver-cbc88db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aab2d05-ec88-4437-948f-f21fe9b0d771", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-cbc88db65-jzzkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc2da34053d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.402 [INFO][5107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.402 [INFO][5107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc2da34053d ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.453 [INFO][5107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.456 [INFO][5107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0", GenerateName:"calico-apiserver-cbc88db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aab2d05-ec88-4437-948f-f21fe9b0d771", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d", Pod:"calico-apiserver-cbc88db65-jzzkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calicc2da34053d", MAC:"76:71:a5:92:ae:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:24.473859 containerd[1646]: 2025-07-06 23:58:24.467 [INFO][5107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-jzzkz" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:24.515256 containerd[1646]: time="2025-07-06T23:58:24.515104976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:24.515621 containerd[1646]: time="2025-07-06T23:58:24.515226814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:24.515695 containerd[1646]: time="2025-07-06T23:58:24.515658409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.515804 containerd[1646]: time="2025-07-06T23:58:24.515763101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:24.529601 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:24.549266 containerd[1646]: time="2025-07-06T23:58:24.549246541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-jzzkz,Uid:1aab2d05-ec88-4437-948f-f21fe9b0d771,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d\"" Jul 6 23:58:24.592410 kubelet[2915]: I0706 23:58:24.591952 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:24.879704 systemd-networkd[1286]: calic3137b4a894: Gained IPv6LL Jul 6 23:58:25.135764 systemd-networkd[1286]: calic52d19b1f87: Gained IPv6LL Jul 6 23:58:25.136007 systemd-networkd[1286]: cali2fd4be5d904: Gained IPv6LL Jul 6 23:58:25.327797 systemd-networkd[1286]: calif183d8f7871: Gained IPv6LL Jul 6 23:58:25.391744 systemd-networkd[1286]: cali31fdb0253fa: Gained IPv6LL Jul 6 23:58:25.809293 kubelet[2915]: I0706 23:58:25.809272 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:26.096118 systemd-networkd[1286]: cali2f29524f208: Gained IPv6LL Jul 6 23:58:26.288486 systemd-networkd[1286]: calicc2da34053d: Gained IPv6LL Jul 6 23:58:26.611768 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:26.609031 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:26.609053 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:58:26.950273 containerd[1646]: time="2025-07-06T23:58:26.950239246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:26.985001 containerd[1646]: time="2025-07-06T23:58:26.984468147Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:26.999560 containerd[1646]: time="2025-07-06T23:58:26.985677087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:58:27.000472 containerd[1646]: time="2025-07-06T23:58:26.992863886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.420503972s" Jul 6 23:58:27.000472 containerd[1646]: time="2025-07-06T23:58:27.000230134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:58:27.002355 containerd[1646]: time="2025-07-06T23:58:26.990994828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:27.006454 containerd[1646]: time="2025-07-06T23:58:27.006257731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:58:27.008348 containerd[1646]: time="2025-07-06T23:58:27.007676635Z" level=info msg="CreateContainer within sandbox 
\"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:58:27.019640 containerd[1646]: time="2025-07-06T23:58:27.019599569Z" level=info msg="CreateContainer within sandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\"" Jul 6 23:58:27.024622 containerd[1646]: time="2025-07-06T23:58:27.022022686Z" level=info msg="StartContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\"" Jul 6 23:58:27.101062 containerd[1646]: time="2025-07-06T23:58:27.101041119Z" level=info msg="StartContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" returns successfully" Jul 6 23:58:27.135024 kubelet[2915]: I0706 23:58:27.134983 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d984f854-dcpkk" podStartSLOduration=27.699587069 podStartE2EDuration="31.134963792s" podCreationTimestamp="2025-07-06 23:57:56 +0000 UTC" firstStartedPulling="2025-07-06 23:58:23.570192196 +0000 UTC m=+43.975376610" lastFinishedPulling="2025-07-06 23:58:27.00556892 +0000 UTC m=+47.410753333" observedRunningTime="2025-07-06 23:58:27.134415095 +0000 UTC m=+47.539599518" watchObservedRunningTime="2025-07-06 23:58:27.134963792 +0000 UTC m=+47.540148209" Jul 6 23:58:27.144679 kernel: bpftool[5481]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:58:27.378891 systemd-networkd[1286]: vxlan.calico: Link UP Jul 6 23:58:27.378896 systemd-networkd[1286]: vxlan.calico: Gained carrier Jul 6 23:58:27.455479 containerd[1646]: time="2025-07-06T23:58:27.455018997Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:27.462170 
containerd[1646]: time="2025-07-06T23:58:27.458073394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:58:27.473421 containerd[1646]: time="2025-07-06T23:58:27.473386229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 466.57519ms" Jul 6 23:58:27.473421 containerd[1646]: time="2025-07-06T23:58:27.473419612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:58:27.479219 containerd[1646]: time="2025-07-06T23:58:27.478234784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:58:27.481873 containerd[1646]: time="2025-07-06T23:58:27.481557158Z" level=info msg="CreateContainer within sandbox \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:58:27.504521 containerd[1646]: time="2025-07-06T23:58:27.504496503Z" level=info msg="CreateContainer within sandbox \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\"" Jul 6 23:58:27.505777 containerd[1646]: time="2025-07-06T23:58:27.505763924Z" level=info msg="StartContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\"" Jul 6 23:58:27.776698 containerd[1646]: time="2025-07-06T23:58:27.776656378Z" level=info msg="StartContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" returns 
successfully" Jul 6 23:58:28.130682 kubelet[2915]: I0706 23:58:28.130404 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:28.657020 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:28.655671 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:28.655935 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:28.928103 containerd[1646]: time="2025-07-06T23:58:28.928077855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:28.928890 containerd[1646]: time="2025-07-06T23:58:28.928781207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:58:28.929110 containerd[1646]: time="2025-07-06T23:58:28.929094540Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:28.930612 containerd[1646]: time="2025-07-06T23:58:28.930590258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:28.930996 containerd[1646]: time="2025-07-06T23:58:28.930980664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.452722628s" Jul 6 23:58:28.931031 containerd[1646]: time="2025-07-06T23:58:28.930999621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:58:28.934300 containerd[1646]: time="2025-07-06T23:58:28.934242625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:58:28.939025 containerd[1646]: time="2025-07-06T23:58:28.939004941Z" level=info msg="CreateContainer within sandbox \"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:58:28.951570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284233346.mount: Deactivated successfully. Jul 6 23:58:28.956995 containerd[1646]: time="2025-07-06T23:58:28.954199180Z" level=info msg="CreateContainer within sandbox \"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e4914cca0a00f961ec1fa53841fcd221e8403a6749ba97e79db3a184ca7cea8a\"" Jul 6 23:58:28.956995 containerd[1646]: time="2025-07-06T23:58:28.954674602Z" level=info msg="StartContainer for \"e4914cca0a00f961ec1fa53841fcd221e8403a6749ba97e79db3a184ca7cea8a\"" Jul 6 23:58:29.012960 containerd[1646]: time="2025-07-06T23:58:29.012890185Z" level=info msg="StartContainer for \"e4914cca0a00f961ec1fa53841fcd221e8403a6749ba97e79db3a184ca7cea8a\" returns successfully" Jul 6 23:58:29.109978 systemd-networkd[1286]: vxlan.calico: Gained IPv6LL Jul 6 23:58:29.218130 kubelet[2915]: I0706 23:58:29.218043 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d984f854-sqcdr" podStartSLOduration=29.470524264 podStartE2EDuration="33.209262326s" podCreationTimestamp="2025-07-06 23:57:56 +0000 UTC" firstStartedPulling="2025-07-06 23:58:23.737799408 +0000 UTC m=+44.142983821" lastFinishedPulling="2025-07-06 23:58:27.476537469 +0000 UTC m=+47.881721883" observedRunningTime="2025-07-06 23:58:28.189171122 +0000 UTC m=+48.594355538" watchObservedRunningTime="2025-07-06 
23:58:29.209262326 +0000 UTC m=+49.614446749" Jul 6 23:58:30.704734 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:30.703674 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:30.703697 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:31.418025 containerd[1646]: time="2025-07-06T23:58:31.417990060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:31.428620 containerd[1646]: time="2025-07-06T23:58:31.428403406Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:31.442387 containerd[1646]: time="2025-07-06T23:58:31.429330193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:58:31.442387 containerd[1646]: time="2025-07-06T23:58:31.431387110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:31.443286 containerd[1646]: time="2025-07-06T23:58:31.437180991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.497261161s" Jul 6 23:58:31.443286 containerd[1646]: time="2025-07-06T23:58:31.442485843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 
23:58:31.473890 containerd[1646]: time="2025-07-06T23:58:31.473842825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:58:31.724171 containerd[1646]: time="2025-07-06T23:58:31.724150537Z" level=info msg="CreateContainer within sandbox \"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:58:31.744089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028622633.mount: Deactivated successfully. Jul 6 23:58:31.757281 containerd[1646]: time="2025-07-06T23:58:31.757225824Z" level=info msg="CreateContainer within sandbox \"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f9af144a69c72c18cc0bd31df7bdd34d920c6fe2082c28bccd20604c7f1df0cd\"" Jul 6 23:58:31.764956 containerd[1646]: time="2025-07-06T23:58:31.764906422Z" level=info msg="StartContainer for \"f9af144a69c72c18cc0bd31df7bdd34d920c6fe2082c28bccd20604c7f1df0cd\"" Jul 6 23:58:31.958008 containerd[1646]: time="2025-07-06T23:58:31.957983091Z" level=info msg="StartContainer for \"f9af144a69c72c18cc0bd31df7bdd34d920c6fe2082c28bccd20604c7f1df0cd\" returns successfully" Jul 6 23:58:32.707842 kubelet[2915]: I0706 23:58:32.686497 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bb8955bc9-hwzqb" podStartSLOduration=26.44384499 podStartE2EDuration="33.665213659s" podCreationTimestamp="2025-07-06 23:57:59 +0000 UTC" firstStartedPulling="2025-07-06 23:58:24.250858731 +0000 UTC m=+44.656043145" lastFinishedPulling="2025-07-06 23:58:31.472227398 +0000 UTC m=+51.877411814" observedRunningTime="2025-07-06 23:58:32.616908498 +0000 UTC m=+53.022092920" watchObservedRunningTime="2025-07-06 23:58:32.665213659 +0000 UTC m=+53.070398076" Jul 6 23:58:32.751851 systemd-resolved[1540]: Under memory pressure, flushing caches. 
Jul 6 23:58:32.754164 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:32.751873 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:33.649631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286733319.mount: Deactivated successfully. Jul 6 23:58:34.259003 containerd[1646]: time="2025-07-06T23:58:34.258955613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:34.271758 containerd[1646]: time="2025-07-06T23:58:34.264147453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:58:34.271758 containerd[1646]: time="2025-07-06T23:58:34.270663194Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:34.300194 containerd[1646]: time="2025-07-06T23:58:34.300073836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:34.301804 containerd[1646]: time="2025-07-06T23:58:34.300963197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.827098745s" Jul 6 23:58:34.301804 containerd[1646]: time="2025-07-06T23:58:34.300979446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:58:34.537824 containerd[1646]: 
time="2025-07-06T23:58:34.537165703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:58:34.572944 containerd[1646]: time="2025-07-06T23:58:34.572907186Z" level=info msg="CreateContainer within sandbox \"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:58:34.590177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722562179.mount: Deactivated successfully. Jul 6 23:58:34.592370 containerd[1646]: time="2025-07-06T23:58:34.592347940Z" level=info msg="CreateContainer within sandbox \"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"23ef58bc3d7ddc10f8897b5887516bf6bd90e29d421bfeb0005ee0d59dc335ef\"" Jul 6 23:58:34.595043 containerd[1646]: time="2025-07-06T23:58:34.594407348Z" level=info msg="StartContainer for \"23ef58bc3d7ddc10f8897b5887516bf6bd90e29d421bfeb0005ee0d59dc335ef\"" Jul 6 23:58:34.800692 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:34.799869 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:34.799887 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:58:34.867415 containerd[1646]: time="2025-07-06T23:58:34.867317564Z" level=info msg="StartContainer for \"23ef58bc3d7ddc10f8897b5887516bf6bd90e29d421bfeb0005ee0d59dc335ef\" returns successfully" Jul 6 23:58:35.000035 containerd[1646]: time="2025-07-06T23:58:35.000010918Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:35.000711 containerd[1646]: time="2025-07-06T23:58:35.000543046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:58:35.002112 containerd[1646]: time="2025-07-06T23:58:35.002066881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 464.876194ms" Jul 6 23:58:35.002112 containerd[1646]: time="2025-07-06T23:58:35.002085139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:58:35.003644 containerd[1646]: time="2025-07-06T23:58:35.002804401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:58:35.004185 containerd[1646]: time="2025-07-06T23:58:35.004057984Z" level=info msg="CreateContainer within sandbox \"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:58:35.010993 containerd[1646]: time="2025-07-06T23:58:35.010937281Z" level=info msg="CreateContainer within sandbox \"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"2d7c3ecbfc8d1aeb6b22ea29ddcd5ae6fa327979146d30a609d27c481f0668b9\"" Jul 6 23:58:35.011461 containerd[1646]: time="2025-07-06T23:58:35.011368951Z" level=info msg="StartContainer for \"2d7c3ecbfc8d1aeb6b22ea29ddcd5ae6fa327979146d30a609d27c481f0668b9\"" Jul 6 23:58:35.077218 containerd[1646]: time="2025-07-06T23:58:35.077087098Z" level=info msg="StartContainer for \"2d7c3ecbfc8d1aeb6b22ea29ddcd5ae6fa327979146d30a609d27c481f0668b9\" returns successfully" Jul 6 23:58:36.084587 kubelet[2915]: I0706 23:58:36.082007 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-spr8t" podStartSLOduration=29.009406236 podStartE2EDuration="39.068100368s" podCreationTimestamp="2025-07-06 23:57:57 +0000 UTC" firstStartedPulling="2025-07-06 23:58:24.474322569 +0000 UTC m=+44.879506982" lastFinishedPulling="2025-07-06 23:58:34.5330167 +0000 UTC m=+54.938201114" observedRunningTime="2025-07-06 23:58:36.023752727 +0000 UTC m=+56.428937149" watchObservedRunningTime="2025-07-06 23:58:36.068100368 +0000 UTC m=+56.473284785" Jul 6 23:58:36.098360 kubelet[2915]: I0706 23:58:36.086025 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cbc88db65-jzzkz" podStartSLOduration=29.633313619 podStartE2EDuration="40.08601297s" podCreationTimestamp="2025-07-06 23:57:56 +0000 UTC" firstStartedPulling="2025-07-06 23:58:24.549979949 +0000 UTC m=+44.955164362" lastFinishedPulling="2025-07-06 23:58:35.002679298 +0000 UTC m=+55.407863713" observedRunningTime="2025-07-06 23:58:36.067028732 +0000 UTC m=+56.472213146" watchObservedRunningTime="2025-07-06 23:58:36.08601297 +0000 UTC m=+56.491197387" Jul 6 23:58:36.849691 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:36.848498 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:36.848521 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:58:36.953049 systemd[1]: run-containerd-runc-k8s.io-23ef58bc3d7ddc10f8897b5887516bf6bd90e29d421bfeb0005ee0d59dc335ef-runc.IIQXFj.mount: Deactivated successfully. Jul 6 23:58:37.001718 containerd[1646]: time="2025-07-06T23:58:37.001691973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:37.004004 containerd[1646]: time="2025-07-06T23:58:37.002979749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:58:37.005366 containerd[1646]: time="2025-07-06T23:58:37.005081016Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:37.013557 containerd[1646]: time="2025-07-06T23:58:37.013525626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:37.014382 containerd[1646]: time="2025-07-06T23:58:37.014085173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.011170428s" Jul 6 23:58:37.014382 containerd[1646]: time="2025-07-06T23:58:37.014106349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:58:37.020284 containerd[1646]: time="2025-07-06T23:58:37.019448471Z" 
level=info msg="CreateContainer within sandbox \"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:58:37.040917 containerd[1646]: time="2025-07-06T23:58:37.040840250Z" level=info msg="CreateContainer within sandbox \"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1cfe4cd3e2b76b89976455b02cc40d878606be58b9401e926805358ab7661a8c\"" Jul 6 23:58:37.042283 containerd[1646]: time="2025-07-06T23:58:37.042269677Z" level=info msg="StartContainer for \"1cfe4cd3e2b76b89976455b02cc40d878606be58b9401e926805358ab7661a8c\"" Jul 6 23:58:37.146768 containerd[1646]: time="2025-07-06T23:58:37.146445575Z" level=info msg="StartContainer for \"1cfe4cd3e2b76b89976455b02cc40d878606be58b9401e926805358ab7661a8c\" returns successfully" Jul 6 23:58:37.653388 kubelet[2915]: I0706 23:58:37.653293 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:37.889878 containerd[1646]: time="2025-07-06T23:58:37.889785813Z" level=info msg="StopContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" with timeout 30 (s)" Jul 6 23:58:37.893553 containerd[1646]: time="2025-07-06T23:58:37.893537726Z" level=info msg="Stop container \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" with signal terminated" Jul 6 23:58:37.961921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef-rootfs.mount: Deactivated successfully. 
Jul 6 23:58:37.988755 containerd[1646]: time="2025-07-06T23:58:37.964801920Z" level=info msg="shim disconnected" id=c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef namespace=k8s.io Jul 6 23:58:38.037341 containerd[1646]: time="2025-07-06T23:58:38.037297743Z" level=warning msg="cleaning up after shim disconnected" id=c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef namespace=k8s.io Jul 6 23:58:38.037341 containerd[1646]: time="2025-07-06T23:58:38.037331227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:38.156868 kubelet[2915]: I0706 23:58:38.156824 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb-calico-apiserver-certs\") pod \"calico-apiserver-cbc88db65-w5dbg\" (UID: \"0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb\") " pod="calico-apiserver/calico-apiserver-cbc88db65-w5dbg" Jul 6 23:58:38.157016 kubelet[2915]: I0706 23:58:38.156915 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwc4\" (UniqueName: \"kubernetes.io/projected/0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb-kube-api-access-xpwc4\") pod \"calico-apiserver-cbc88db65-w5dbg\" (UID: \"0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb\") " pod="calico-apiserver/calico-apiserver-cbc88db65-w5dbg" Jul 6 23:58:38.208123 containerd[1646]: time="2025-07-06T23:58:38.208072534Z" level=info msg="StopContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" returns successfully" Jul 6 23:58:38.706022 containerd[1646]: time="2025-07-06T23:58:38.705921541Z" level=info msg="StopPodSandbox for \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\"" Jul 6 23:58:38.712827 containerd[1646]: time="2025-07-06T23:58:38.709961600Z" level=info msg="Container to stop \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:58:38.712497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0-shm.mount: Deactivated successfully. Jul 6 23:58:38.745263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0-rootfs.mount: Deactivated successfully. Jul 6 23:58:38.746445 containerd[1646]: time="2025-07-06T23:58:38.746228839Z" level=info msg="shim disconnected" id=92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0 namespace=k8s.io Jul 6 23:58:38.747294 containerd[1646]: time="2025-07-06T23:58:38.746322460Z" level=warning msg="cleaning up after shim disconnected" id=92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0 namespace=k8s.io Jul 6 23:58:38.747294 containerd[1646]: time="2025-07-06T23:58:38.746578741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:38.764666 containerd[1646]: time="2025-07-06T23:58:38.764148050Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:58:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:58:38.896961 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:38.898291 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:38.896966 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:58:38.967996 kubelet[2915]: I0706 23:58:38.961208 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Jul 6 23:58:38.978888 kubelet[2915]: I0706 23:58:38.978861 2915 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:58:38.983158 kubelet[2915]: I0706 23:58:38.983146 2915 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:58:39.051136 containerd[1646]: time="2025-07-06T23:58:39.051106900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-w5dbg,Uid:0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:58:39.304321 systemd-networkd[1286]: calic52d19b1f87: Link DOWN Jul 6 23:58:39.304325 systemd-networkd[1286]: calic52d19b1f87: Lost carrier Jul 6 23:58:39.324699 kubelet[2915]: I0706 23:58:39.305622 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvnfh" podStartSLOduration=27.057427622 podStartE2EDuration="40.289009606s" podCreationTimestamp="2025-07-06 23:57:59 +0000 UTC" firstStartedPulling="2025-07-06 23:58:23.784165074 +0000 UTC m=+44.189349487" lastFinishedPulling="2025-07-06 23:58:37.015747058 +0000 UTC m=+57.420931471" observedRunningTime="2025-07-06 23:58:37.929577735 +0000 UTC m=+58.334762158" watchObservedRunningTime="2025-07-06 23:58:39.289009606 +0000 UTC m=+59.694194023" Jul 6 23:58:39.694594 systemd-networkd[1286]: cali8575c5553f3: Link UP Jul 6 23:58:39.695845 systemd-networkd[1286]: cali8575c5553f3: Gained carrier Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.266 [INFO][6033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0 calico-apiserver-cbc88db65- calico-apiserver 0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb 1106 0 2025-07-06 23:58:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:cbc88db65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-cbc88db65-w5dbg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8575c5553f3 [] [] }} ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.270 [INFO][6033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.639 [INFO][6048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" HandleID="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Workload="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.640 [INFO][6048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" HandleID="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Workload="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002bf9c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-cbc88db65-w5dbg", "timestamp":"2025-07-06 23:58:39.63983282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.640 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.640 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.640 [INFO][6048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.662 [INFO][6048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.670 [INFO][6048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.672 [INFO][6048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.673 [INFO][6048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.675 [INFO][6048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.675 [INFO][6048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" 
host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.677 [INFO][6048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2 Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.682 [INFO][6048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.685 [INFO][6048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.685 [INFO][6048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" host="localhost" Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.686 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:58:39.711272 containerd[1646]: 2025-07-06 23:58:39.686 [INFO][6048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" HandleID="k8s-pod-network.eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Workload="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.688 [INFO][6033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0", GenerateName:"calico-apiserver-cbc88db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-cbc88db65-w5dbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8575c5553f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.689 [INFO][6033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.689 [INFO][6033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8575c5553f3 ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.699 [INFO][6033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.699 [INFO][6033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0", GenerateName:"calico-apiserver-cbc88db65-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2", Pod:"calico-apiserver-cbc88db65-w5dbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8575c5553f3", MAC:"2e:3b:f0:78:18:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:39.721837 containerd[1646]: 2025-07-06 23:58:39.706 [INFO][6033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2" Namespace="calico-apiserver" Pod="calico-apiserver-cbc88db65-w5dbg" WorkloadEndpoint="localhost-k8s-calico--apiserver--cbc88db65--w5dbg-eth0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.277 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.278 [INFO][6029] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" iface="eth0" netns="/var/run/netns/cni-5968fdf4-959d-7f8b-ff1f-bd1c6da866fc" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.281 [INFO][6029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" iface="eth0" netns="/var/run/netns/cni-5968fdf4-959d-7f8b-ff1f-bd1c6da866fc" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.295 [INFO][6029] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" after=17.655713ms iface="eth0" netns="/var/run/netns/cni-5968fdf4-959d-7f8b-ff1f-bd1c6da866fc" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.295 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.295 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.639 [INFO][6050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.641 [INFO][6050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.686 [INFO][6050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.770 [INFO][6050] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.770 [INFO][6050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.771 [INFO][6050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:39.775628 containerd[1646]: 2025-07-06 23:58:39.774 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Jul 6 23:58:39.782532 containerd[1646]: time="2025-07-06T23:58:39.781888856Z" level=info msg="TearDown network for sandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" successfully" Jul 6 23:58:39.782532 containerd[1646]: time="2025-07-06T23:58:39.781917609Z" level=info msg="StopPodSandbox for \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" returns successfully" Jul 6 23:58:39.787074 systemd[1]: run-netns-cni\x2d5968fdf4\x2d959d\x2d7f8b\x2dff1f\x2dbd1c6da866fc.mount: Deactivated successfully. Jul 6 23:58:39.921837 containerd[1646]: time="2025-07-06T23:58:39.921797402Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:40.042470 containerd[1646]: time="2025-07-06T23:58:40.005011259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:40.042470 containerd[1646]: time="2025-07-06T23:58:40.019423863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:40.042470 containerd[1646]: time="2025-07-06T23:58:40.019435111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:40.042470 containerd[1646]: time="2025-07-06T23:58:40.028178974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.001 [WARNING][6093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"78734851-1240-4e35-b671-f1113509a863", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0", Pod:"calico-apiserver-66d984f854-dcpkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52d19b1f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.003 [INFO][6093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.003 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.003 [INFO][6093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.003 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.042 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.044 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.044 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.053 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.054 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.059 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.086657 containerd[1646]: 2025-07-06 23:58:40.066 [INFO][6093] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:40.090324 containerd[1646]: time="2025-07-06T23:58:40.087933940Z" level=info msg="TearDown network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" successfully" Jul 6 23:58:40.090324 containerd[1646]: time="2025-07-06T23:58:40.087953707Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" returns successfully" Jul 6 23:58:40.112788 containerd[1646]: time="2025-07-06T23:58:40.112618480Z" level=info msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" Jul 6 23:58:40.143627 systemd[1]: run-containerd-runc-k8s.io-eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2-runc.7khaut.mount: Deactivated successfully. Jul 6 23:58:40.157529 systemd-resolved[1540]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:58:40.215975 containerd[1646]: time="2025-07-06T23:58:40.215945672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-cbc88db65-w5dbg,Uid:0786dbbf-3e1d-4a1c-a8ff-edf71d8da2fb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2\"" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.177 [WARNING][6129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0", GenerateName:"calico-kube-controllers-5bb8955bc9-", Namespace:"calico-system", SelfLink:"", UID:"f6ec6603-698d-463e-944a-b5d9f581d0b3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8955bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e", Pod:"calico-kube-controllers-5bb8955bc9-hwzqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31fdb0253fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.178 [INFO][6129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.178 [INFO][6129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" iface="eth0" netns="" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.178 [INFO][6129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.178 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.219 [INFO][6148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.219 [INFO][6148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.219 [INFO][6148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.225 [WARNING][6148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.225 [INFO][6148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.227 [INFO][6148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.231368 containerd[1646]: 2025-07-06 23:58:40.228 [INFO][6129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.233456 containerd[1646]: time="2025-07-06T23:58:40.231436336Z" level=info msg="TearDown network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" successfully" Jul 6 23:58:40.233456 containerd[1646]: time="2025-07-06T23:58:40.231449413Z" level=info msg="StopPodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" returns successfully" Jul 6 23:58:40.296674 kubelet[2915]: I0706 23:58:40.296498 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78734851-1240-4e35-b671-f1113509a863-calico-apiserver-certs\") pod \"78734851-1240-4e35-b671-f1113509a863\" (UID: \"78734851-1240-4e35-b671-f1113509a863\") " Jul 6 23:58:40.296674 kubelet[2915]: I0706 23:58:40.296544 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85mh\" (UniqueName: 
\"kubernetes.io/projected/78734851-1240-4e35-b671-f1113509a863-kube-api-access-z85mh\") pod \"78734851-1240-4e35-b671-f1113509a863\" (UID: \"78734851-1240-4e35-b671-f1113509a863\") " Jul 6 23:58:40.322125 systemd[1]: var-lib-kubelet-pods-78734851\x2d1240\x2d4e35\x2db671\x2df1113509a863-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz85mh.mount: Deactivated successfully. Jul 6 23:58:40.322221 systemd[1]: var-lib-kubelet-pods-78734851\x2d1240\x2d4e35\x2db671\x2df1113509a863-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 6 23:58:40.332626 kubelet[2915]: I0706 23:58:40.330742 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78734851-1240-4e35-b671-f1113509a863-kube-api-access-z85mh" (OuterVolumeSpecName: "kube-api-access-z85mh") pod "78734851-1240-4e35-b671-f1113509a863" (UID: "78734851-1240-4e35-b671-f1113509a863"). InnerVolumeSpecName "kube-api-access-z85mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:58:40.332706 containerd[1646]: time="2025-07-06T23:58:40.332681128Z" level=info msg="RemovePodSandbox for \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" Jul 6 23:58:40.332706 containerd[1646]: time="2025-07-06T23:58:40.332704314Z" level=info msg="Forcibly stopping sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\"" Jul 6 23:58:40.347417 kubelet[2915]: I0706 23:58:40.346899 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78734851-1240-4e35-b671-f1113509a863-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "78734851-1240-4e35-b671-f1113509a863" (UID: "78734851-1240-4e35-b671-f1113509a863"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:58:40.371991 containerd[1646]: time="2025-07-06T23:58:40.371968126Z" level=info msg="CreateContainer within sandbox \"eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.365 [WARNING][6172] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0", GenerateName:"calico-kube-controllers-5bb8955bc9-", Namespace:"calico-system", SelfLink:"", UID:"f6ec6603-698d-463e-944a-b5d9f581d0b3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bb8955bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90a3d0afe6030b142b745dd67fa6d49824367b979548ab4208d151fc8222b44e", Pod:"calico-kube-controllers-5bb8955bc9-hwzqb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali31fdb0253fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.366 [INFO][6172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.366 [INFO][6172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" iface="eth0" netns="" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.366 [INFO][6172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.366 [INFO][6172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.382 [INFO][6180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.383 [INFO][6180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.383 [INFO][6180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.386 [WARNING][6180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.386 [INFO][6180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" HandleID="k8s-pod-network.e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Workload="localhost-k8s-calico--kube--controllers--5bb8955bc9--hwzqb-eth0" Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.387 [INFO][6180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.391054 containerd[1646]: 2025-07-06 23:58:40.389 [INFO][6172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf" Jul 6 23:58:40.391054 containerd[1646]: time="2025-07-06T23:58:40.390849556Z" level=info msg="TearDown network for sandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" successfully" Jul 6 23:58:40.396922 kubelet[2915]: I0706 23:58:40.396882 2915 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78734851-1240-4e35-b671-f1113509a863-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 6 23:58:40.396922 kubelet[2915]: I0706 23:58:40.396900 2915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85mh\" (UniqueName: \"kubernetes.io/projected/78734851-1240-4e35-b671-f1113509a863-kube-api-access-z85mh\") on node \"localhost\" DevicePath \"\"" Jul 6 23:58:40.512526 containerd[1646]: time="2025-07-06T23:58:40.512497361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\": an error 
occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:58:40.512933 containerd[1646]: time="2025-07-06T23:58:40.512553465Z" level=info msg="RemovePodSandbox \"e27ff51af32743d5320f3003a78f1453d4f662bda7dbaa9be1c502ecd5dc9fcf\" returns successfully" Jul 6 23:58:40.512933 containerd[1646]: time="2025-07-06T23:58:40.512858409Z" level=info msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" Jul 6 23:58:40.549508 containerd[1646]: time="2025-07-06T23:58:40.548924338Z" level=info msg="CreateContainer within sandbox \"eaee2ee399ab04165e3e1424a00279c467df57fa107de66f6d233d0977487fc2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3c0e5d41de7aa301097fcb892c7c135701edd54cf24f4b55cb5cfdfe3038ada\"" Jul 6 23:58:40.558625 containerd[1646]: time="2025-07-06T23:58:40.557786936Z" level=info msg="StartContainer for \"c3c0e5d41de7aa301097fcb892c7c135701edd54cf24f4b55cb5cfdfe3038ada\"" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.566 [WARNING][6194] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" WorkloadEndpoint="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.567 [INFO][6194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.567 [INFO][6194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" iface="eth0" netns="" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.567 [INFO][6194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.567 [INFO][6194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.590 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.590 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.590 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.595 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.595 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.595 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.598772 containerd[1646]: 2025-07-06 23:58:40.597 [INFO][6194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.599578 containerd[1646]: time="2025-07-06T23:58:40.598773227Z" level=info msg="TearDown network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" successfully" Jul 6 23:58:40.599578 containerd[1646]: time="2025-07-06T23:58:40.598790998Z" level=info msg="StopPodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" returns successfully" Jul 6 23:58:40.599578 containerd[1646]: time="2025-07-06T23:58:40.599157868Z" level=info msg="RemovePodSandbox for \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" Jul 6 23:58:40.599578 containerd[1646]: time="2025-07-06T23:58:40.599173493Z" level=info msg="Forcibly stopping sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\"" Jul 6 23:58:40.615742 containerd[1646]: time="2025-07-06T23:58:40.615708001Z" level=info msg="StartContainer for \"c3c0e5d41de7aa301097fcb892c7c135701edd54cf24f4b55cb5cfdfe3038ada\" returns successfully" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.635 
[WARNING][6242] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" WorkloadEndpoint="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.635 [INFO][6242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.635 [INFO][6242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" iface="eth0" netns="" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.635 [INFO][6242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.635 [INFO][6242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.664 [INFO][6257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.665 [INFO][6257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.665 [INFO][6257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.670 [WARNING][6257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.670 [INFO][6257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" HandleID="k8s-pod-network.42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Workload="localhost-k8s-whisker--ccc6d79dd--5wwps-eth0" Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.671 [INFO][6257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.677021 containerd[1646]: 2025-07-06 23:58:40.674 [INFO][6242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64" Jul 6 23:58:40.678090 containerd[1646]: time="2025-07-06T23:58:40.677044543Z" level=info msg="TearDown network for sandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" successfully" Jul 6 23:58:40.684472 containerd[1646]: time="2025-07-06T23:58:40.684434064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:40.684472 containerd[1646]: time="2025-07-06T23:58:40.684469164Z" level=info msg="RemovePodSandbox \"42a9b739b84e940584bfc305b57e88bd8cfb72f4a0d79d1c56bb7333c454cb64\" returns successfully" Jul 6 23:58:40.705321 containerd[1646]: time="2025-07-06T23:58:40.705281628Z" level=info msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.732 [WARNING][6274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6083f4-7360-4fbe-9116-052981667c66", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2", Pod:"calico-apiserver-66d984f854-sqcdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3137b4a894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.732 [INFO][6274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.732 [INFO][6274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" iface="eth0" netns="" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.732 [INFO][6274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.732 [INFO][6274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.753 [INFO][6281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.753 [INFO][6281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.753 [INFO][6281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.757 [WARNING][6281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.757 [INFO][6281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.758 [INFO][6281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.763208 containerd[1646]: 2025-07-06 23:58:40.761 [INFO][6274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.764531 containerd[1646]: time="2025-07-06T23:58:40.763307976Z" level=info msg="TearDown network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" successfully" Jul 6 23:58:40.764531 containerd[1646]: time="2025-07-06T23:58:40.763495731Z" level=info msg="StopPodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" returns successfully" Jul 6 23:58:40.773302 containerd[1646]: time="2025-07-06T23:58:40.773130615Z" level=info msg="RemovePodSandbox for \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" Jul 6 23:58:40.773302 containerd[1646]: time="2025-07-06T23:58:40.773151100Z" level=info msg="Forcibly stopping sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\"" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.813 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa6083f4-7360-4fbe-9116-052981667c66", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2", Pod:"calico-apiserver-66d984f854-sqcdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3137b4a894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.813 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.813 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" iface="eth0" netns="" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.813 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.813 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.838 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.838 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.839 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.842 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.842 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" HandleID="k8s-pod-network.dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0" Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.843 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.847156 containerd[1646]: 2025-07-06 23:58:40.845 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2" Jul 6 23:58:40.847156 containerd[1646]: time="2025-07-06T23:58:40.846505165Z" level=info msg="TearDown network for sandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" successfully" Jul 6 23:58:40.852731 containerd[1646]: time="2025-07-06T23:58:40.852709861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:40.852868 containerd[1646]: time="2025-07-06T23:58:40.852793891Z" level=info msg="RemovePodSandbox \"dc9e9028cd60f228c6b9a959c1f03b99f7a97b50eec30d656c53b810f0ef03b2\" returns successfully" Jul 6 23:58:40.853315 containerd[1646]: time="2025-07-06T23:58:40.853153659Z" level=info msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.881 [WARNING][6316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--spr8t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"94185960-601a-483a-9212-e04b82a9d723", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576", Pod:"goldmane-58fd7646b9-spr8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f29524f208", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.881 [INFO][6316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.881 [INFO][6316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" iface="eth0" netns="" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.881 [INFO][6316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.881 [INFO][6316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.899 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.899 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.900 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.903 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.903 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.904 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.907402 containerd[1646]: 2025-07-06 23:58:40.906 [INFO][6316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.909906 containerd[1646]: time="2025-07-06T23:58:40.907663464Z" level=info msg="TearDown network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" successfully" Jul 6 23:58:40.909906 containerd[1646]: time="2025-07-06T23:58:40.907680712Z" level=info msg="StopPodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" returns successfully" Jul 6 23:58:40.909906 containerd[1646]: time="2025-07-06T23:58:40.908720119Z" level=info msg="RemovePodSandbox for \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" Jul 6 23:58:40.909906 containerd[1646]: time="2025-07-06T23:58:40.908739588Z" level=info msg="Forcibly stopping sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\"" Jul 6 23:58:40.943742 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:40.945735 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:40.943747 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.939 [WARNING][6338] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--spr8t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"94185960-601a-483a-9212-e04b82a9d723", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f801d867b750b12e4caf2a28ca584bbf1b765617cf5b00eb199dd591f952a576", Pod:"goldmane-58fd7646b9-spr8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f29524f208", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.940 [INFO][6338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.940 
[INFO][6338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" iface="eth0" netns="" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.940 [INFO][6338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.940 [INFO][6338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.958 [INFO][6345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.958 [INFO][6345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.958 [INFO][6345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.962 [WARNING][6345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.962 [INFO][6345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" HandleID="k8s-pod-network.772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Workload="localhost-k8s-goldmane--58fd7646b9--spr8t-eth0" Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.963 [INFO][6345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:40.983301 containerd[1646]: 2025-07-06 23:58:40.964 [INFO][6338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033" Jul 6 23:58:40.983301 containerd[1646]: time="2025-07-06T23:58:40.965582837Z" level=info msg="TearDown network for sandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" successfully" Jul 6 23:58:40.991147 containerd[1646]: time="2025-07-06T23:58:40.990995869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:40.991147 containerd[1646]: time="2025-07-06T23:58:40.991048313Z" level=info msg="RemovePodSandbox \"772b37fef7916dce815edadefbadb7112311e9a44095bf39f8cc4241e4af9033\" returns successfully" Jul 6 23:58:40.991764 containerd[1646]: time="2025-07-06T23:58:40.991518600Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.019 [WARNING][6359] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"78734851-1240-4e35-b671-f1113509a863", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0", Pod:"calico-apiserver-66d984f854-dcpkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52d19b1f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.019 [INFO][6359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.019 [INFO][6359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.019 [INFO][6359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.019 [INFO][6359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.039 [INFO][6366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.039 [INFO][6366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.039 [INFO][6366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.044 [WARNING][6366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.044 [INFO][6366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.045 [INFO][6366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.049171 containerd[1646]: 2025-07-06 23:58:41.047 [INFO][6359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.052327 containerd[1646]: time="2025-07-06T23:58:41.049656563Z" level=info msg="TearDown network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" successfully" Jul 6 23:58:41.052327 containerd[1646]: time="2025-07-06T23:58:41.049676920Z" level=info msg="StopPodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" returns successfully" Jul 6 23:58:41.052327 containerd[1646]: time="2025-07-06T23:58:41.050038545Z" level=info msg="RemovePodSandbox for \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:41.052327 containerd[1646]: time="2025-07-06T23:58:41.050057749Z" level=info msg="Forcibly stopping sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\"" Jul 6 23:58:41.072865 systemd-networkd[1286]: cali8575c5553f3: Gained IPv6LL Jul 6 23:58:41.077866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866335934.mount: Deactivated successfully. 
Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.085 [WARNING][6382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0", GenerateName:"calico-apiserver-66d984f854-", Namespace:"calico-apiserver", SelfLink:"", UID:"78734851-1240-4e35-b671-f1113509a863", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d984f854", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0", Pod:"calico-apiserver-66d984f854-dcpkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic52d19b1f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.085 [INFO][6382] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.085 [INFO][6382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" iface="eth0" netns="" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.085 [INFO][6382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.085 [INFO][6382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.102 [INFO][6389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.102 [INFO][6389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.102 [INFO][6389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.106 [WARNING][6389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.106 [INFO][6389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" HandleID="k8s-pod-network.305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0" Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.106 [INFO][6389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.110445 containerd[1646]: 2025-07-06 23:58:41.108 [INFO][6382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070" Jul 6 23:58:41.111301 containerd[1646]: time="2025-07-06T23:58:41.110661745Z" level=info msg="TearDown network for sandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" successfully" Jul 6 23:58:41.142397 containerd[1646]: time="2025-07-06T23:58:41.141852693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:41.142397 containerd[1646]: time="2025-07-06T23:58:41.142127484Z" level=info msg="RemovePodSandbox \"305307f300ced02e26613baabde083e836f1bd6c04cb97de63fb07e42f655070\" returns successfully" Jul 6 23:58:41.144131 containerd[1646]: time="2025-07-06T23:58:41.144000431Z" level=info msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.213 [WARNING][6408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"edda1339-9324-4b2e-a45e-3e301a3aa698", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130", Pod:"coredns-7c65d6cfc9-b7l64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd4be5d904", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.217 [INFO][6408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.217 [INFO][6408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" iface="eth0" netns="" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.217 [INFO][6408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.217 [INFO][6408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.248 [INFO][6415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.248 [INFO][6415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.249 [INFO][6415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.253 [WARNING][6415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.253 [INFO][6415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.256 [INFO][6415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.261695 containerd[1646]: 2025-07-06 23:58:41.258 [INFO][6408] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.264525 containerd[1646]: time="2025-07-06T23:58:41.261891516Z" level=info msg="TearDown network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" successfully" Jul 6 23:58:41.264525 containerd[1646]: time="2025-07-06T23:58:41.261912228Z" level=info msg="StopPodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" returns successfully" Jul 6 23:58:41.267597 containerd[1646]: time="2025-07-06T23:58:41.267579378Z" level=info msg="RemovePodSandbox for \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" Jul 6 23:58:41.267724 containerd[1646]: time="2025-07-06T23:58:41.267714764Z" level=info msg="Forcibly stopping sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\"" Jul 6 23:58:41.339083 kubelet[2915]: I0706 23:58:41.339048 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-cbc88db65-w5dbg" podStartSLOduration=4.339031596 podStartE2EDuration="4.339031596s" podCreationTimestamp="2025-07-06 23:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:41.31988609 +0000 UTC m=+61.725070507" watchObservedRunningTime="2025-07-06 23:58:41.339031596 +0000 UTC m=+61.744216013" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.307 [WARNING][6429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"edda1339-9324-4b2e-a45e-3e301a3aa698", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca8abc3b178eeec606bbb79902c6816480289ba44c065219d28226879d385130", Pod:"coredns-7c65d6cfc9-b7l64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd4be5d904", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.307 [INFO][6429] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.307 [INFO][6429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" iface="eth0" netns="" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.307 [INFO][6429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.307 [INFO][6429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.336 [INFO][6436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.337 [INFO][6436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.337 [INFO][6436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.348 [WARNING][6436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.350 [INFO][6436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" HandleID="k8s-pod-network.dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Workload="localhost-k8s-coredns--7c65d6cfc9--b7l64-eth0" Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.365 [INFO][6436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.370818 containerd[1646]: 2025-07-06 23:58:41.367 [INFO][6429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28" Jul 6 23:58:41.370818 containerd[1646]: time="2025-07-06T23:58:41.370791445Z" level=info msg="TearDown network for sandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" successfully" Jul 6 23:58:41.379279 containerd[1646]: time="2025-07-06T23:58:41.379254112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:41.379381 containerd[1646]: time="2025-07-06T23:58:41.379301105Z" level=info msg="RemovePodSandbox \"dd8b0779078be7ddc1011fe7e80b37e48186565ff6409636527690455bdaac28\" returns successfully" Jul 6 23:58:41.379639 containerd[1646]: time="2025-07-06T23:58:41.379595486Z" level=info msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.409 [WARNING][6452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0", GenerateName:"calico-apiserver-cbc88db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aab2d05-ec88-4437-948f-f21fe9b0d771", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d", Pod:"calico-apiserver-cbc88db65-jzzkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc2da34053d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.409 [INFO][6452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.409 [INFO][6452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" iface="eth0" netns="" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.409 [INFO][6452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.409 [INFO][6452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.424 [INFO][6459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.424 [INFO][6459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.424 [INFO][6459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.427 [WARNING][6459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.427 [INFO][6459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.428 [INFO][6459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.431422 containerd[1646]: 2025-07-06 23:58:41.430 [INFO][6452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.431422 containerd[1646]: time="2025-07-06T23:58:41.431343962Z" level=info msg="TearDown network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" successfully" Jul 6 23:58:41.431422 containerd[1646]: time="2025-07-06T23:58:41.431358777Z" level=info msg="StopPodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" returns successfully" Jul 6 23:58:41.436890 containerd[1646]: time="2025-07-06T23:58:41.431854671Z" level=info msg="RemovePodSandbox for \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" Jul 6 23:58:41.436890 containerd[1646]: time="2025-07-06T23:58:41.431872485Z" level=info msg="Forcibly stopping sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\"" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.463 [WARNING][6473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0", GenerateName:"calico-apiserver-cbc88db65-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aab2d05-ec88-4437-948f-f21fe9b0d771", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"cbc88db65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d1e5071ce5b713787b744e03738176a94fd81f6020823d2e1e7758e77dbfd1d", Pod:"calico-apiserver-cbc88db65-jzzkz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc2da34053d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.463 [INFO][6473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.463 [INFO][6473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" iface="eth0" netns="" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.463 [INFO][6473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.463 [INFO][6473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.477 [INFO][6481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.477 [INFO][6481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.477 [INFO][6481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.480 [WARNING][6481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.480 [INFO][6481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" HandleID="k8s-pod-network.d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Workload="localhost-k8s-calico--apiserver--cbc88db65--jzzkz-eth0" Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.481 [INFO][6481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.484981 containerd[1646]: 2025-07-06 23:58:41.483 [INFO][6473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2" Jul 6 23:58:41.485765 containerd[1646]: time="2025-07-06T23:58:41.485030290Z" level=info msg="TearDown network for sandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" successfully" Jul 6 23:58:41.488759 containerd[1646]: time="2025-07-06T23:58:41.488733369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:41.488804 containerd[1646]: time="2025-07-06T23:58:41.488766336Z" level=info msg="RemovePodSandbox \"d0c843f0c6df51c8735546399490a9323349811e5441710045276716bd30f3b2\" returns successfully" Jul 6 23:58:41.489839 containerd[1646]: time="2025-07-06T23:58:41.489122874Z" level=info msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.528 [WARNING][6495] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvnfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451e31a5-b9c2-46fb-96e4-f1e20de500e9", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85", Pod:"csi-node-driver-mvnfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif183d8f7871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.528 [INFO][6495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.528 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" iface="eth0" netns="" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.528 [INFO][6495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.528 [INFO][6495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.546 [INFO][6502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.546 [INFO][6502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.546 [INFO][6502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.550 [WARNING][6502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.550 [INFO][6502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.552 [INFO][6502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.555019 containerd[1646]: 2025-07-06 23:58:41.553 [INFO][6495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.558276 containerd[1646]: time="2025-07-06T23:58:41.555047267Z" level=info msg="TearDown network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" successfully" Jul 6 23:58:41.558276 containerd[1646]: time="2025-07-06T23:58:41.555062879Z" level=info msg="StopPodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" returns successfully" Jul 6 23:58:41.558276 containerd[1646]: time="2025-07-06T23:58:41.555438761Z" level=info msg="RemovePodSandbox for \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" Jul 6 23:58:41.558276 containerd[1646]: time="2025-07-06T23:58:41.555509274Z" level=info msg="Forcibly stopping sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\"" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.581 [WARNING][6516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvnfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"451e31a5-b9c2-46fb-96e4-f1e20de500e9", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6461fb565e002e05b72e4cbc5a9767f7c3a4335865e788a6801fd0f8ec0c4f85", Pod:"csi-node-driver-mvnfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif183d8f7871", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.581 [INFO][6516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.581 [INFO][6516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" iface="eth0" netns="" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.581 [INFO][6516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.581 [INFO][6516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.598 [INFO][6523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.598 [INFO][6523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.598 [INFO][6523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.603 [WARNING][6523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.603 [INFO][6523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" HandleID="k8s-pod-network.f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Workload="localhost-k8s-csi--node--driver--mvnfh-eth0" Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.604 [INFO][6523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.609106 containerd[1646]: 2025-07-06 23:58:41.607 [INFO][6516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad" Jul 6 23:58:41.609106 containerd[1646]: time="2025-07-06T23:58:41.608990647Z" level=info msg="TearDown network for sandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" successfully" Jul 6 23:58:41.611781 containerd[1646]: time="2025-07-06T23:58:41.611754863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:41.611829 containerd[1646]: time="2025-07-06T23:58:41.611792357Z" level=info msg="RemovePodSandbox \"f0f479c7249c63c5d8b2673a414c94d8ca74a4258d1279d96096fcf24d86b6ad\" returns successfully" Jul 6 23:58:41.612755 containerd[1646]: time="2025-07-06T23:58:41.612736413Z" level=info msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.658 [WARNING][6538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ee4ce16-275b-4d42-b3b2-5651a9edac72", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383", Pod:"coredns-7c65d6cfc9-rwltj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6621f5c05e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.658 [INFO][6538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.658 [INFO][6538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" iface="eth0" netns="" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.658 [INFO][6538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.658 [INFO][6538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.679 [INFO][6545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.680 [INFO][6545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.680 [INFO][6545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.685 [WARNING][6545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.685 [INFO][6545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.686 [INFO][6545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.690163 containerd[1646]: 2025-07-06 23:58:41.687 [INFO][6538] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.693795 containerd[1646]: time="2025-07-06T23:58:41.690951387Z" level=info msg="TearDown network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" successfully" Jul 6 23:58:41.693795 containerd[1646]: time="2025-07-06T23:58:41.690969958Z" level=info msg="StopPodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" returns successfully" Jul 6 23:58:41.698176 containerd[1646]: time="2025-07-06T23:58:41.698084974Z" level=info msg="RemovePodSandbox for \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" Jul 6 23:58:41.698176 containerd[1646]: time="2025-07-06T23:58:41.698110891Z" level=info msg="Forcibly stopping sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\"" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.730 [WARNING][6560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ee4ce16-275b-4d42-b3b2-5651a9edac72", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d00052c86bf1f5d8cea5e5c94afaa55a723afcd08018c58522ef559cda1d383", Pod:"coredns-7c65d6cfc9-rwltj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6621f5c05e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.730 [INFO][6560] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.730 [INFO][6560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" iface="eth0" netns="" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.730 [INFO][6560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.730 [INFO][6560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.755 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.755 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.755 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.765 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.765 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" HandleID="k8s-pod-network.164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Workload="localhost-k8s-coredns--7c65d6cfc9--rwltj-eth0" Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.766 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:58:41.772057 containerd[1646]: 2025-07-06 23:58:41.770 [INFO][6560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a" Jul 6 23:58:41.772897 containerd[1646]: time="2025-07-06T23:58:41.772082475Z" level=info msg="TearDown network for sandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" successfully" Jul 6 23:58:41.775101 containerd[1646]: time="2025-07-06T23:58:41.775083123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:58:41.775153 containerd[1646]: time="2025-07-06T23:58:41.775145649Z" level=info msg="RemovePodSandbox \"164527933e14a429286d37d9f1028b7584a78d0eea5ddfbb13990110285bfa5a\" returns successfully" Jul 6 23:58:41.795463 kubelet[2915]: I0706 23:58:41.795432 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78734851-1240-4e35-b671-f1113509a863" path="/var/lib/kubelet/pods/78734851-1240-4e35-b671-f1113509a863/volumes" Jul 6 23:58:42.195706 kubelet[2915]: I0706 23:58:42.195591 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:58:42.991781 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:43.002962 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:42.991787 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:48.561021 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:48.561027 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:48.562680 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:50.608698 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:58:50.607775 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:58:50.607782 systemd-resolved[1540]: Flushed all caches. Jul 6 23:58:58.111809 systemd[1]: Started sshd@7-139.178.70.109:22-139.178.68.195:60668.service - OpenSSH per-connection server daemon (139.178.68.195:60668). Jul 6 23:58:58.258394 sshd[6673]: Accepted publickey for core from 139.178.68.195 port 60668 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 6 23:58:58.263051 sshd[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:58:58.285741 systemd-logind[1621]: New session 10 of user core. Jul 6 23:58:58.289756 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 6 23:58:58.745544 sshd[6673]: pam_unix(sshd:session): session closed for user core Jul 6 23:58:58.757393 systemd-logind[1621]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:58:58.761340 systemd[1]: sshd@7-139.178.70.109:22-139.178.68.195:60668.service: Deactivated successfully. Jul 6 23:58:58.763387 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:58:58.764889 systemd-logind[1621]: Removed session 10. Jul 6 23:59:00.591994 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:59:00.611289 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:59:00.591999 systemd-resolved[1540]: Flushed all caches. Jul 6 23:59:02.639733 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:59:02.640823 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:59:02.639739 systemd-resolved[1540]: Flushed all caches. Jul 6 23:59:03.764830 systemd[1]: Started sshd@8-139.178.70.109:22-139.178.68.195:60684.service - OpenSSH per-connection server daemon (139.178.68.195:60684). Jul 6 23:59:03.943544 sshd[6706]: Accepted publickey for core from 139.178.68.195 port 60684 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k Jul 6 23:59:03.968181 sshd[6706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:59:03.974308 systemd-logind[1621]: New session 11 of user core. Jul 6 23:59:03.981769 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:59:04.687766 systemd-resolved[1540]: Under memory pressure, flushing caches. Jul 6 23:59:04.694431 systemd-journald[1183]: Under memory pressure, flushing caches. Jul 6 23:59:04.687771 systemd-resolved[1540]: Flushed all caches. 
Jul 6 23:59:04.786192 kubelet[2915]: I0706 23:59:04.764167 2915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:59:05.591497 sshd[6706]: pam_unix(sshd:session): session closed for user core Jul 6 23:59:05.617047 systemd[1]: sshd@8-139.178.70.109:22-139.178.68.195:60684.service: Deactivated successfully. Jul 6 23:59:05.624812 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:59:05.625636 systemd-logind[1621]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:59:05.628836 systemd-logind[1621]: Removed session 11. Jul 6 23:59:05.648169 containerd[1646]: time="2025-07-06T23:59:05.640313695Z" level=info msg="StopContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" with timeout 30 (s)" Jul 6 23:59:06.039256 containerd[1646]: time="2025-07-06T23:59:06.039184396Z" level=info msg="Stop container \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" with signal terminated" Jul 6 23:59:06.276404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2-rootfs.mount: Deactivated successfully. 
Jul 6 23:59:06.290692 containerd[1646]: time="2025-07-06T23:59:06.281152457Z" level=info msg="shim disconnected" id=d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2 namespace=k8s.io Jul 6 23:59:06.290692 containerd[1646]: time="2025-07-06T23:59:06.290533982Z" level=warning msg="cleaning up after shim disconnected" id=d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2 namespace=k8s.io Jul 6 23:59:06.290692 containerd[1646]: time="2025-07-06T23:59:06.290543991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:59:06.346322 containerd[1646]: time="2025-07-06T23:59:06.346290863Z" level=info msg="StopContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" returns successfully" Jul 6 23:59:06.367395 containerd[1646]: time="2025-07-06T23:59:06.367358348Z" level=info msg="StopPodSandbox for \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\"" Jul 6 23:59:06.367395 containerd[1646]: time="2025-07-06T23:59:06.367392806Z" level=info msg="Container to stop \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:59:06.369559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2-shm.mount: Deactivated successfully. 
Jul 6 23:59:06.390400 containerd[1646]: time="2025-07-06T23:59:06.390364361Z" level=info msg="shim disconnected" id=4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2 namespace=k8s.io
Jul 6 23:59:06.390400 containerd[1646]: time="2025-07-06T23:59:06.390396955Z" level=warning msg="cleaning up after shim disconnected" id=4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2 namespace=k8s.io
Jul 6 23:59:06.390400 containerd[1646]: time="2025-07-06T23:59:06.390403073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:59:06.392231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2-rootfs.mount: Deactivated successfully.
Jul 6 23:59:06.735686 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:06.776234 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:06.735696 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:06.856894 kubelet[2915]: I0706 23:59:06.856857 2915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 6 23:59:08.301033 systemd-networkd[1286]: calic3137b4a894: Link DOWN
Jul 6 23:59:08.301039 systemd-networkd[1286]: calic3137b4a894: Lost carrier
Jul 6 23:59:08.784758 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:08.792188 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:08.784763 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.264 [INFO][6799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.272 [INFO][6799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" iface="eth0" netns="/var/run/netns/cni-e76c395c-3348-2598-f638-41c6aa37476d"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.274 [INFO][6799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" iface="eth0" netns="/var/run/netns/cni-e76c395c-3348-2598-f638-41c6aa37476d"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.289 [INFO][6799] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" after=15.508612ms iface="eth0" netns="/var/run/netns/cni-e76c395c-3348-2598-f638-41c6aa37476d"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.289 [INFO][6799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.289 [INFO][6799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.956 [INFO][6809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.958 [INFO][6809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:08.958 [INFO][6809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:09.037 [INFO][6809] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:09.037 [INFO][6809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:09.039 [INFO][6809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:09.053945 containerd[1646]: 2025-07-06 23:59:09.041 [INFO][6799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 6 23:59:09.078106 systemd[1]: run-netns-cni\x2de76c395c\x2d3348\x2d2598\x2df638\x2d41c6aa37476d.mount: Deactivated successfully.
Jul 6 23:59:09.081830 containerd[1646]: time="2025-07-06T23:59:09.078964897Z" level=info msg="TearDown network for sandbox \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" successfully"
Jul 6 23:59:09.081830 containerd[1646]: time="2025-07-06T23:59:09.079526910Z" level=info msg="StopPodSandbox for \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" returns successfully"
Jul 6 23:59:09.310427 kubelet[2915]: I0706 23:59:09.310213 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjfsg\" (UniqueName: \"kubernetes.io/projected/fa6083f4-7360-4fbe-9116-052981667c66-kube-api-access-hjfsg\") pod \"fa6083f4-7360-4fbe-9116-052981667c66\" (UID: \"fa6083f4-7360-4fbe-9116-052981667c66\") "
Jul 6 23:59:09.310427 kubelet[2915]: I0706 23:59:09.310270 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa6083f4-7360-4fbe-9116-052981667c66-calico-apiserver-certs\") pod \"fa6083f4-7360-4fbe-9116-052981667c66\" (UID: \"fa6083f4-7360-4fbe-9116-052981667c66\") "
Jul 6 23:59:09.375939 systemd[1]: var-lib-kubelet-pods-fa6083f4\x2d7360\x2d4fbe\x2d9116\x2d052981667c66-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 6 23:59:09.378367 systemd[1]: var-lib-kubelet-pods-fa6083f4\x2d7360\x2d4fbe\x2d9116\x2d052981667c66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjfsg.mount: Deactivated successfully.
Jul 6 23:59:09.389528 kubelet[2915]: I0706 23:59:09.387058 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6083f4-7360-4fbe-9116-052981667c66-kube-api-access-hjfsg" (OuterVolumeSpecName: "kube-api-access-hjfsg") pod "fa6083f4-7360-4fbe-9116-052981667c66" (UID: "fa6083f4-7360-4fbe-9116-052981667c66"). InnerVolumeSpecName "kube-api-access-hjfsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:59:09.389925 kubelet[2915]: I0706 23:59:09.387343 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa6083f4-7360-4fbe-9116-052981667c66-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "fa6083f4-7360-4fbe-9116-052981667c66" (UID: "fa6083f4-7360-4fbe-9116-052981667c66"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 6 23:59:09.410521 kubelet[2915]: I0706 23:59:09.410500 2915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjfsg\" (UniqueName: \"kubernetes.io/projected/fa6083f4-7360-4fbe-9116-052981667c66-kube-api-access-hjfsg\") on node \"localhost\" DevicePath \"\""
Jul 6 23:59:09.410737 kubelet[2915]: I0706 23:59:09.410558 2915 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa6083f4-7360-4fbe-9116-052981667c66-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 6 23:59:10.673871 systemd[1]: Started sshd@9-139.178.70.109:22-139.178.68.195:59360.service - OpenSSH per-connection server daemon (139.178.68.195:59360).
Jul 6 23:59:10.831952 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:10.831957 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:10.832662 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:11.209331 sshd[6831]: Accepted publickey for core from 139.178.68.195 port 59360 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:11.212639 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:11.220548 systemd-logind[1621]: New session 12 of user core.
Jul 6 23:59:11.225827 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:59:11.747733 sshd[6831]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:11.754756 systemd[1]: Started sshd@10-139.178.70.109:22-139.178.68.195:59364.service - OpenSSH per-connection server daemon (139.178.68.195:59364).
Jul 6 23:59:11.755020 systemd[1]: sshd@9-139.178.70.109:22-139.178.68.195:59360.service: Deactivated successfully.
Jul 6 23:59:11.757452 systemd-logind[1621]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:59:11.758910 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:59:11.761383 systemd-logind[1621]: Removed session 12.
Jul 6 23:59:11.800579 kubelet[2915]: I0706 23:59:11.794935 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6083f4-7360-4fbe-9116-052981667c66" path="/var/lib/kubelet/pods/fa6083f4-7360-4fbe-9116-052981667c66/volumes"
Jul 6 23:59:11.812584 sshd[6844]: Accepted publickey for core from 139.178.68.195 port 59364 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:11.814194 sshd[6844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:11.816896 systemd-logind[1621]: New session 13 of user core.
Jul 6 23:59:11.821760 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:59:12.168944 sshd[6844]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:12.173892 systemd[1]: Started sshd@11-139.178.70.109:22-139.178.68.195:59378.service - OpenSSH per-connection server daemon (139.178.68.195:59378).
Jul 6 23:59:12.175402 systemd[1]: sshd@10-139.178.70.109:22-139.178.68.195:59364.service: Deactivated successfully.
Jul 6 23:59:12.178864 systemd-logind[1621]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:59:12.179720 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:59:12.180474 systemd-logind[1621]: Removed session 13.
Jul 6 23:59:12.482411 sshd[6857]: Accepted publickey for core from 139.178.68.195 port 59378 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:12.487639 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:12.490906 systemd-logind[1621]: New session 14 of user core.
Jul 6 23:59:12.495825 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:59:12.684124 sshd[6857]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:12.686099 systemd[1]: sshd@11-139.178.70.109:22-139.178.68.195:59378.service: Deactivated successfully.
Jul 6 23:59:12.688221 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:59:12.688241 systemd-logind[1621]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:59:12.689228 systemd-logind[1621]: Removed session 14.
Jul 6 23:59:12.879863 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:12.880764 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:12.879868 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:17.692739 systemd[1]: Started sshd@12-139.178.70.109:22-139.178.68.195:59388.service - OpenSSH per-connection server daemon (139.178.68.195:59388).
Jul 6 23:59:17.965785 sshd[6880]: Accepted publickey for core from 139.178.68.195 port 59388 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:17.967766 sshd[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:17.972110 systemd-logind[1621]: New session 15 of user core.
Jul 6 23:59:17.976859 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:59:18.114678 sshd[6880]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:18.116622 systemd[1]: sshd@12-139.178.70.109:22-139.178.68.195:59388.service: Deactivated successfully.
Jul 6 23:59:18.117991 systemd-logind[1621]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:59:18.117997 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:59:18.118782 systemd-logind[1621]: Removed session 15.
Jul 6 23:59:18.576684 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:18.575907 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:18.575913 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:22.607895 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:22.608733 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:22.607901 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:23.121815 systemd[1]: Started sshd@13-139.178.70.109:22-139.178.68.195:41274.service - OpenSSH per-connection server daemon (139.178.68.195:41274).
Jul 6 23:59:23.349403 sshd[6957]: Accepted publickey for core from 139.178.68.195 port 41274 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:23.362286 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:23.371150 systemd-logind[1621]: New session 16 of user core.
Jul 6 23:59:23.378796 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:59:24.656693 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:24.656225 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:24.656231 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:25.047680 sshd[6957]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:25.055578 systemd[1]: Started sshd@14-139.178.70.109:22-139.178.68.195:41284.service - OpenSSH per-connection server daemon (139.178.68.195:41284).
Jul 6 23:59:25.061282 systemd[1]: sshd@13-139.178.70.109:22-139.178.68.195:41274.service: Deactivated successfully.
Jul 6 23:59:25.064148 systemd-logind[1621]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:59:25.064148 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:59:25.066445 systemd-logind[1621]: Removed session 16.
Jul 6 23:59:25.161131 sshd[6989]: Accepted publickey for core from 139.178.68.195 port 41284 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:25.162682 sshd[6989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:25.170878 systemd-logind[1621]: New session 17 of user core.
Jul 6 23:59:25.177799 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:59:25.952808 systemd[1]: Started sshd@15-139.178.70.109:22-139.178.68.195:41294.service - OpenSSH per-connection server daemon (139.178.68.195:41294).
Jul 6 23:59:25.964542 sshd[6989]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:25.985576 systemd-logind[1621]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:59:25.985980 systemd[1]: sshd@14-139.178.70.109:22-139.178.68.195:41284.service: Deactivated successfully.
Jul 6 23:59:25.987633 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:59:25.989220 systemd-logind[1621]: Removed session 17.
Jul 6 23:59:26.144652 sshd[7001]: Accepted publickey for core from 139.178.68.195 port 41294 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:26.145573 sshd[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:26.148408 systemd-logind[1621]: New session 18 of user core.
Jul 6 23:59:26.150814 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:59:26.703783 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:26.715257 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:26.703788 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:27.979538 update_engine[1623]: I20250706 23:59:27.979429 1623 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 6 23:59:27.979538 update_engine[1623]: I20250706 23:59:27.979478 1623 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.022451 1623 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042578 1623 omaha_request_params.cc:62] Current group set to lts
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042752 1623 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042762 1623 update_attempter.cc:643] Scheduling an action processor start.
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042790 1623 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042825 1623 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042878 1623 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042886 1623 omaha_request_action.cc:272] Request:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]:
Jul 6 23:59:28.138536 update_engine[1623]: I20250706 23:59:28.042891 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:28.175702 locksmithd[1672]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 6 23:59:28.196664 update_engine[1623]: I20250706 23:59:28.148168 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:28.196664 update_engine[1623]: I20250706 23:59:28.148386 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:28.196664 update_engine[1623]: E20250706 23:59:28.155702 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:28.196664 update_engine[1623]: I20250706 23:59:28.155743 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 6 23:59:28.767241 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:28.924725 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:28.767253 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:30.350775 sshd[7001]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:30.373820 systemd[1]: Started sshd@16-139.178.70.109:22-139.178.68.195:36372.service - OpenSSH per-connection server daemon (139.178.68.195:36372).
Jul 6 23:59:30.401091 systemd[1]: sshd@15-139.178.70.109:22-139.178.68.195:41294.service: Deactivated successfully.
Jul 6 23:59:30.403033 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:59:30.403061 systemd-logind[1621]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:59:30.403958 systemd-logind[1621]: Removed session 18.
Jul 6 23:59:30.642422 sshd[7023]: Accepted publickey for core from 139.178.68.195 port 36372 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:30.649213 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:30.669759 systemd-logind[1621]: New session 19 of user core.
Jul 6 23:59:30.672234 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:59:30.802561 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:30.822153 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:30.802567 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:32.163243 kubelet[2915]: E0706 23:59:32.120399 2915 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.288s"
Jul 6 23:59:32.852886 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:32.867272 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:32.852893 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:33.226816 sshd[7023]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:33.265338 systemd[1]: Started sshd@17-139.178.70.109:22-139.178.68.195:36388.service - OpenSSH per-connection server daemon (139.178.68.195:36388).
Jul 6 23:59:33.288020 systemd[1]: sshd@16-139.178.70.109:22-139.178.68.195:36372.service: Deactivated successfully.
Jul 6 23:59:33.297923 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:59:33.298230 systemd-logind[1621]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:59:33.304354 systemd-logind[1621]: Removed session 19.
Jul 6 23:59:33.554305 sshd[7040]: Accepted publickey for core from 139.178.68.195 port 36388 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:33.581548 sshd[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:33.597871 systemd-logind[1621]: New session 20 of user core.
Jul 6 23:59:33.603337 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:59:34.898618 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:34.900886 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:34.900895 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:36.167191 sshd[7040]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:36.206951 systemd[1]: sshd@17-139.178.70.109:22-139.178.68.195:36388.service: Deactivated successfully.
Jul 6 23:59:36.209993 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:59:36.210520 systemd-logind[1621]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:59:36.223982 systemd-logind[1621]: Removed session 20.
Jul 6 23:59:36.946034 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:36.947865 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:36.947873 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:38.786182 update_engine[1623]: I20250706 23:59:38.784745 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:38.830316 update_engine[1623]: I20250706 23:59:38.807900 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:38.830316 update_engine[1623]: I20250706 23:59:38.808411 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:38.830316 update_engine[1623]: E20250706 23:59:38.816565 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:38.830316 update_engine[1623]: I20250706 23:59:38.822424 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 6 23:59:40.593624 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:40.637675 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:40.637685 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:41.207547 systemd[1]: Started sshd@18-139.178.70.109:22-139.178.68.195:55794.service - OpenSSH per-connection server daemon (139.178.68.195:55794).
Jul 6 23:59:41.688486 sshd[7090]: Accepted publickey for core from 139.178.68.195 port 55794 ssh2: RSA SHA256:/9exTOE5j0h3myXsW5LwESM2vwqV1QarY1uHfK4Vy7k
Jul 6 23:59:41.693669 sshd[7090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:59:41.809315 systemd-logind[1621]: New session 21 of user core.
Jul 6 23:59:41.837202 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:59:42.517317 kubelet[2915]: I0706 23:59:42.500585 2915 scope.go:117] "RemoveContainer" containerID="d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2"
Jul 6 23:59:42.648744 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:42.650138 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:42.650145 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:44.711732 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:44.704553 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:44.704565 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:45.119658 containerd[1646]: time="2025-07-06T23:59:45.083760723Z" level=info msg="RemoveContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\""
Jul 6 23:59:45.420155 containerd[1646]: time="2025-07-06T23:59:45.420056446Z" level=info msg="RemoveContainer for \"d58de1d7022bb21d24395e1cf1e49b7e37b6ee6df05083db0a590f234ccd90f2\" returns successfully"
Jul 6 23:59:46.771905 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:46.985579 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:46.771918 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:48.891290 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:48.883395 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:49.216342 update_engine[1623]: I20250706 23:59:48.955682 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:49.216342 update_engine[1623]: I20250706 23:59:49.053870 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:49.216342 update_engine[1623]: I20250706 23:59:49.071851 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:49.216342 update_engine[1623]: E20250706 23:59:49.091835 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:49.216342 update_engine[1623]: I20250706 23:59:49.091889 1623 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 6 23:59:48.883414 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:50.845741 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:50.848636 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:50.848642 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:52.886483 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:52.885715 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:52.885723 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:53.571022 kubelet[2915]: I0706 23:59:53.559138 2915 scope.go:117] "RemoveContainer" containerID="c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef"
Jul 6 23:59:54.018050 containerd[1646]: time="2025-07-06T23:59:54.008120160Z" level=info msg="RemoveContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\""
Jul 6 23:59:54.112996 containerd[1646]: time="2025-07-06T23:59:54.112912147Z" level=info msg="RemoveContainer for \"c494c1beb6eec56ef03fb1240e2d8284b2539c7158e49d862e67ab8d490809ef\" returns successfully"
Jul 6 23:59:54.117986 kubelet[2915]: E0706 23:59:54.117960 2915 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.125s"
Jul 6 23:59:54.281694 containerd[1646]: time="2025-07-06T23:59:54.278727652Z" level=info msg="StopPodSandbox for \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\""
Jul 6 23:59:54.943389 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:54.996975 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:54.943396 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.132 [WARNING][7184] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.143 [INFO][7184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.143 [INFO][7184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" iface="eth0" netns=""
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.143 [INFO][7184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.143 [INFO][7184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.927 [INFO][7210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.946 [INFO][7210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:55.947 [INFO][7210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:56.034 [WARNING][7210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:56.034 [INFO][7210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:56.035 [INFO][7210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:56.126720 containerd[1646]: 2025-07-06 23:59:56.058 [INFO][7184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:56.294985 containerd[1646]: time="2025-07-06T23:59:56.155470591Z" level=info msg="TearDown network for sandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" successfully"
Jul 6 23:59:56.294985 containerd[1646]: time="2025-07-06T23:59:56.159500891Z" level=info msg="StopPodSandbox for \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" returns successfully"
Jul 6 23:59:56.368490 containerd[1646]: time="2025-07-06T23:59:56.367554818Z" level=info msg="RemovePodSandbox for \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\""
Jul 6 23:59:56.368490 containerd[1646]: time="2025-07-06T23:59:56.367584560Z" level=info msg="Forcibly stopping sandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\""
Jul 6 23:59:56.988621 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:56.995275 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:56.998461 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.146 [WARNING][7225] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.152 [INFO][7225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.152 [INFO][7225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" iface="eth0" netns=""
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.152 [INFO][7225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.152 [INFO][7225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.656 [INFO][7232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.663 [INFO][7232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.666 [INFO][7232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.740 [WARNING][7232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.740 [INFO][7232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" HandleID="k8s-pod-network.92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0" Workload="localhost-k8s-calico--apiserver--66d984f854--dcpkk-eth0"
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.741 [INFO][7232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:59:57.776921 containerd[1646]: 2025-07-06 23:59:57.756 [INFO][7225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0"
Jul 6 23:59:57.857736 containerd[1646]: time="2025-07-06T23:59:57.846964146Z" level=info msg="TearDown network for sandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" successfully"
Jul 6 23:59:57.997441 containerd[1646]: time="2025-07-06T23:59:57.992377087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:59:58.009813 containerd[1646]: time="2025-07-06T23:59:58.009763014Z" level=info msg="RemovePodSandbox \"92d522d9ac441fafb019292f14edfec9154fd26a8f2454909143d70b72384de0\" returns successfully"
Jul 6 23:59:58.412012 containerd[1646]: time="2025-07-06T23:59:58.410078983Z" level=info msg="StopPodSandbox for \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\""
Jul 6 23:59:58.817536 sshd[7090]: pam_unix(sshd:session): session closed for user core
Jul 6 23:59:58.850801 systemd[1]: sshd@18-139.178.70.109:22-139.178.68.195:55794.service: Deactivated successfully.
Jul 6 23:59:58.854445 systemd-logind[1621]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:59:58.855014 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:59:58.860048 systemd-logind[1621]: Removed session 21.
Jul 6 23:59:59.025147 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 6 23:59:59.029412 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 6 23:59:59.029419 systemd-resolved[1540]: Flushed all caches.
Jul 6 23:59:59.771111 update_engine[1623]: I20250706 23:59:59.763998 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:59.850336 update_engine[1623]: I20250706 23:59:59.786578 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:59.850336 update_engine[1623]: I20250706 23:59:59.791828 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:59.850336 update_engine[1623]: E20250706 23:59:59.798185 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:59.850336 update_engine[1623]: I20250706 23:59:59.798223 1623 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:59:59.850336 update_engine[1623]: I20250706 23:59:59.798228 1623 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:59:59.850336 update_engine[1623]: E20250706 23:59:59.799715 1623 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 6 23:59:59.877625 update_engine[1623]: I20250706 23:59:59.877120 1623 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 6 23:59:59.877625 update_engine[1623]: I20250706 23:59:59.877156 1623 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:59.877625 update_engine[1623]: I20250706 23:59:59.877161 1623 update_attempter.cc:306] Processing Done.
Jul 6 23:59:59.881164 update_engine[1623]: E20250706 23:59:59.881138 1623 update_attempter.cc:619] Update failed.
Jul 6 23:59:59.881229 update_engine[1623]: I20250706 23:59:59.881218 1623 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 6 23:59:59.882781 update_engine[1623]: I20250706 23:59:59.881260 1623 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 6 23:59:59.882781 update_engine[1623]: I20250706 23:59:59.881268 1623 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 6 23:59:59.887139 update_engine[1623]: I20250706 23:59:59.886761 1623 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 6 23:59:59.887139 update_engine[1623]: I20250706 23:59:59.886799 1623 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 6 23:59:59.887139 update_engine[1623]: I20250706 23:59:59.886804 1623 omaha_request_action.cc:272] Request:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]:
Jul 6 23:59:59.887139 update_engine[1623]: I20250706 23:59:59.886808 1623 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 6 23:59:59.887139 update_engine[1623]: I20250706 23:59:59.886929 1623 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 6 23:59:59.891360 update_engine[1623]: I20250706 23:59:59.889839 1623 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 6 23:59:59.897427 update_engine[1623]: E20250706 23:59:59.897265 1623 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897321 1623 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897330 1623 omaha_request_action.cc:617] Omaha request response:
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897336 1623 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897338 1623 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897341 1623 update_attempter.cc:306] Processing Done.
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897346 1623 update_attempter.cc:310] Error event sent.
Jul 6 23:59:59.897427 update_engine[1623]: I20250706 23:59:59.897360 1623 update_check_scheduler.cc:74] Next update check in 48m9s
Jul 6 23:59:59.913932 locksmithd[1672]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 6 23:59:59.913932 locksmithd[1672]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.290 [WARNING][7246] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.310 [INFO][7246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.310 [INFO][7246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" iface="eth0" netns=""
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.313 [INFO][7246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.313 [INFO][7246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.975 [INFO][7258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.986 [INFO][7258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-06 23:59:59.989 [INFO][7258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-07 00:00:00.083 [WARNING][7258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-07 00:00:00.083 [INFO][7258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" HandleID="k8s-pod-network.4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2" Workload="localhost-k8s-calico--apiserver--66d984f854--sqcdr-eth0"
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-07 00:00:00.084 [INFO][7258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 00:00:00.157964 containerd[1646]: 2025-07-07 00:00:00.102 [INFO][7246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2"
Jul 7 00:00:00.204259 containerd[1646]: time="2025-07-07T00:00:00.187015261Z" level=info msg="TearDown network for sandbox \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" successfully"
Jul 7 00:00:00.228913 containerd[1646]: time="2025-07-07T00:00:00.228881162Z" level=info msg="StopPodSandbox for \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\" returns successfully"
Jul 7 00:00:00.560223 containerd[1646]: time="2025-07-07T00:00:00.558530231Z" level=info msg="RemovePodSandbox for \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\""
Jul 7 00:00:00.560223 containerd[1646]: time="2025-07-07T00:00:00.558558566Z" level=info msg="Forcibly stopping sandbox \"4abfd4ae1f87c33ea5d39d546776e977b621c84c7dc2664f6e53324b212f5ca2\""
Jul 7 00:00:00.786804 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
Jul 7 00:00:00.810448 systemd[1]: Starting mdadm.service - Initiates a check run of an MD array's redundancy information....
Jul 7 00:00:01.036839 systemd[1]: logrotate.service: Deactivated successfully.
Jul 7 00:00:01.058895 systemd[1]: mdadm.service: Deactivated successfully.
Jul 7 00:00:01.059244 systemd[1]: Finished mdadm.service - Initiates a check run of an MD array's redundancy information..
Jul 7 00:00:01.077956 systemd-journald[1183]: Under memory pressure, flushing caches.
Jul 7 00:00:01.077463 systemd-resolved[1540]: Under memory pressure, flushing caches.
Jul 7 00:00:01.077477 systemd-resolved[1540]: Flushed all caches.
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.188718 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/logrotate.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247128 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/logrotate.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247154 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/logrotate.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247166 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/logrotate.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247177 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/mdadm.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/mdadm.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247188 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/mdadm.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/mdadm.service: no such file or directory
Jul 7 00:00:01.247222 kubelet[2915]: W0707 00:00:01.247198 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/mdadm.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/mdadm.service: no such file or directory
Jul 7 00:00:01.272939 kubelet[2915]: W0707 00:00:01.247205 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/mdadm.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/mdadm.service: no such file or directory
Jul 7 00:00:01.272939 kubelet[2915]: W0707 00:00:01.247219 2915 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/mdadm.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/mdadm.service: no such file or directory