May 8 00:34:20.759627 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025 May 8 00:34:20.759645 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:34:20.759651 kernel: Disabled fast string operations May 8 00:34:20.759655 kernel: BIOS-provided physical RAM map: May 8 00:34:20.759659 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 8 00:34:20.759663 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 8 00:34:20.759669 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 8 00:34:20.759674 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 8 00:34:20.759678 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 8 00:34:20.759682 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 8 00:34:20.759686 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 8 00:34:20.759690 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 8 00:34:20.759695 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 8 00:34:20.759699 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 8 00:34:20.759705 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 8 00:34:20.759710 kernel: NX (Execute Disable) protection: active May 8 00:34:20.759715 kernel: APIC: Static calls initialized May 8 00:34:20.759720 kernel: SMBIOS 2.7 present. 
May 8 00:34:20.759725 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 8 00:34:20.759729 kernel: vmware: hypercall mode: 0x00 May 8 00:34:20.759734 kernel: Hypervisor detected: VMware May 8 00:34:20.759739 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 8 00:34:20.759745 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 8 00:34:20.759749 kernel: vmware: using clock offset of 4097046173 ns May 8 00:34:20.759754 kernel: tsc: Detected 3408.000 MHz processor May 8 00:34:20.759759 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:34:20.759765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:34:20.759770 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 8 00:34:20.759775 kernel: total RAM covered: 3072M May 8 00:34:20.759779 kernel: Found optimal setting for mtrr clean up May 8 00:34:20.759787 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 8 00:34:20.759793 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 8 00:34:20.759798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:34:20.759802 kernel: Using GB pages for direct mapping May 8 00:34:20.759807 kernel: ACPI: Early table checksum verification disabled May 8 00:34:20.759812 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 8 00:34:20.759817 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 8 00:34:20.759822 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 8 00:34:20.759827 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 8 00:34:20.759832 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:34:20.759840 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 8 00:34:20.759845 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 
00000001) May 8 00:34:20.759850 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) May 8 00:34:20.759855 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 8 00:34:20.759860 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 8 00:34:20.759867 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 8 00:34:20.759873 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 8 00:34:20.759878 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 8 00:34:20.759883 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 8 00:34:20.759888 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:34:20.759893 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 8 00:34:20.759898 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 8 00:34:20.759903 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 8 00:34:20.759909 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 8 00:34:20.759914 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 8 00:34:20.759920 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 8 00:34:20.759925 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 8 00:34:20.759930 kernel: system APIC only can use physical flat May 8 00:34:20.759935 kernel: APIC: Switched APIC routing to: physical flat May 8 00:34:20.759940 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 8 00:34:20.759946 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 8 00:34:20.759951 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 8 00:34:20.759956 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 8 00:34:20.759961 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 8 
00:34:20.759967 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 8 00:34:20.759972 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 8 00:34:20.759977 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 8 00:34:20.759982 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 8 00:34:20.759987 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 8 00:34:20.759992 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 8 00:34:20.759997 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 8 00:34:20.760002 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 8 00:34:20.760007 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 8 00:34:20.760012 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 8 00:34:20.760018 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 8 00:34:20.760023 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 8 00:34:20.760028 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 8 00:34:20.760033 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 8 00:34:20.760047 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 8 00:34:20.760052 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 8 00:34:20.760057 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 8 00:34:20.760062 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 8 00:34:20.760067 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 8 00:34:20.760072 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 8 00:34:20.760079 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 8 00:34:20.760084 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 8 00:34:20.760089 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 8 00:34:20.760094 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 8 00:34:20.760099 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 May 8 00:34:20.760104 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 8 00:34:20.760109 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 8 00:34:20.760114 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 8 00:34:20.760119 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 8 00:34:20.760124 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 8 00:34:20.760129 kernel: SRAT: PXM 0 -> APIC 0x46 
-> Node 0 May 8 00:34:20.760135 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 8 00:34:20.760140 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 8 00:34:20.760145 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 8 00:34:20.760150 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 8 00:34:20.760155 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 8 00:34:20.760161 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 8 00:34:20.760166 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 8 00:34:20.760170 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 8 00:34:20.760176 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 8 00:34:20.760180 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 8 00:34:20.760187 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 8 00:34:20.760192 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 8 00:34:20.760197 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 8 00:34:20.760202 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 8 00:34:20.760207 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 8 00:34:20.760212 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 8 00:34:20.760217 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 8 00:34:20.760222 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 8 00:34:20.760227 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 8 00:34:20.760232 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 8 00:34:20.760238 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 8 00:34:20.760244 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 8 00:34:20.760252 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 8 00:34:20.760265 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 8 00:34:20.760276 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 8 00:34:20.760282 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 8 00:34:20.760288 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 8 00:34:20.760293 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 8 00:34:20.760301 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 8 00:34:20.760310 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 8 00:34:20.760318 kernel: SRAT: PXM 
0 -> APIC 0x84 -> Node 0 May 8 00:34:20.760328 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 8 00:34:20.760333 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 8 00:34:20.760339 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 8 00:34:20.760344 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 8 00:34:20.760350 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 8 00:34:20.760355 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 8 00:34:20.760360 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 8 00:34:20.760366 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 8 00:34:20.760373 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 8 00:34:20.760379 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 8 00:34:20.760384 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 8 00:34:20.760389 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 8 00:34:20.760394 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 8 00:34:20.760400 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 8 00:34:20.760405 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 8 00:34:20.760410 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 8 00:34:20.760416 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 May 8 00:34:20.760421 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 8 00:34:20.760428 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 8 00:34:20.760433 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 8 00:34:20.760438 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 8 00:34:20.760443 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 8 00:34:20.760449 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 8 00:34:20.760454 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 8 00:34:20.760459 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 8 00:34:20.760465 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 8 00:34:20.760470 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 8 00:34:20.760476 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 8 00:34:20.760482 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 8 00:34:20.760487 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 8 00:34:20.760493 
kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 8 00:34:20.760498 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 8 00:34:20.760503 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 8 00:34:20.760508 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 8 00:34:20.760514 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 8 00:34:20.760519 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 8 00:34:20.760524 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 8 00:34:20.760530 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 8 00:34:20.760536 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 8 00:34:20.760541 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 8 00:34:20.760547 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 8 00:34:20.760552 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 8 00:34:20.760557 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 8 00:34:20.760563 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 8 00:34:20.760568 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 8 00:34:20.760573 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 8 00:34:20.760579 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 8 00:34:20.760584 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 8 00:34:20.760591 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 8 00:34:20.760596 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 8 00:34:20.760601 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 8 00:34:20.760607 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 8 00:34:20.760612 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 8 00:34:20.760617 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 8 00:34:20.760623 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 8 00:34:20.760628 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 8 00:34:20.760633 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 8 00:34:20.760639 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 8 00:34:20.760644 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 8 00:34:20.760650 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 8 00:34:20.760656 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 8 
00:34:20.760661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 8 00:34:20.760667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 8 00:34:20.760672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 8 00:34:20.760678 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 8 00:34:20.760684 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 8 00:34:20.760690 kernel: Zone ranges: May 8 00:34:20.760695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:34:20.760702 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 8 00:34:20.760708 kernel: Normal empty May 8 00:34:20.760713 kernel: Movable zone start for each node May 8 00:34:20.760719 kernel: Early memory node ranges May 8 00:34:20.760724 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 8 00:34:20.760729 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 8 00:34:20.760735 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 8 00:34:20.760741 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 8 00:34:20.760746 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:34:20.760751 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 8 00:34:20.760758 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 8 00:34:20.760763 kernel: ACPI: PM-Timer IO Port: 0x1008 May 8 00:34:20.760769 kernel: system APIC only can use physical flat May 8 00:34:20.760774 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 8 00:34:20.760780 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 8 00:34:20.760785 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 8 00:34:20.760791 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 8 00:34:20.760796 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 8 00:34:20.760802 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x05] high edge lint[0x1]) May 8 00:34:20.760808 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 8 00:34:20.760814 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 8 00:34:20.760819 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 8 00:34:20.760825 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 8 00:34:20.760830 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 8 00:34:20.760836 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 8 00:34:20.760841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 8 00:34:20.760846 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 8 00:34:20.760852 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 8 00:34:20.760858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 8 00:34:20.760864 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 8 00:34:20.760869 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 8 00:34:20.760875 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 8 00:34:20.760880 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 8 00:34:20.760886 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 8 00:34:20.760891 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 8 00:34:20.760896 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 8 00:34:20.760902 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 8 00:34:20.760907 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 8 00:34:20.760914 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 8 00:34:20.760919 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 8 00:34:20.760925 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 8 00:34:20.760930 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 8 00:34:20.760938 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge 
lint[0x1]) May 8 00:34:20.760947 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 8 00:34:20.760956 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 8 00:34:20.760965 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 8 00:34:20.760972 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 8 00:34:20.760982 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 8 00:34:20.760988 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 8 00:34:20.760993 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 8 00:34:20.760998 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 8 00:34:20.761004 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 8 00:34:20.761009 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 8 00:34:20.761017 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 8 00:34:20.761024 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 8 00:34:20.761030 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 8 00:34:20.761035 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 8 00:34:20.761089 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 8 00:34:20.761097 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 8 00:34:20.761106 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 8 00:34:20.761114 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 8 00:34:20.761119 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 8 00:34:20.761124 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 8 00:34:20.761130 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 8 00:34:20.761135 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 8 00:34:20.761141 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 8 00:34:20.761148 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 8 
00:34:20.761154 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 8 00:34:20.761159 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 8 00:34:20.761164 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 8 00:34:20.761170 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 8 00:34:20.761175 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 8 00:34:20.761181 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 8 00:34:20.761186 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 8 00:34:20.761192 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 8 00:34:20.761197 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 8 00:34:20.761203 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 8 00:34:20.761209 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 8 00:34:20.761214 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 8 00:34:20.761220 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 8 00:34:20.761225 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 8 00:34:20.761231 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 8 00:34:20.761236 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 8 00:34:20.761241 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 8 00:34:20.761247 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 8 00:34:20.761253 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 8 00:34:20.761259 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 8 00:34:20.761264 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 8 00:34:20.761269 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 8 00:34:20.761275 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 8 00:34:20.761280 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 8 00:34:20.761285 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 8 00:34:20.761291 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 8 00:34:20.761296 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 8 00:34:20.761302 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 8 00:34:20.761308 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 8 00:34:20.761314 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 8 00:34:20.761320 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 8 00:34:20.761327 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 8 00:34:20.761332 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 8 00:34:20.761338 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 8 00:34:20.761343 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 8 00:34:20.761351 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 8 00:34:20.761360 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 8 00:34:20.761368 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 8 00:34:20.761375 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 8 00:34:20.761380 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 8 00:34:20.761386 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 8 00:34:20.761391 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 8 00:34:20.761397 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 8 00:34:20.761402 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 8 00:34:20.761407 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 8 00:34:20.761413 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 8 00:34:20.761418 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 8 00:34:20.761425 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 8 00:34:20.761431 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high 
edge lint[0x1]) May 8 00:34:20.761436 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 8 00:34:20.761441 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 8 00:34:20.761447 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 8 00:34:20.761452 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 8 00:34:20.761458 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 8 00:34:20.761463 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 8 00:34:20.761469 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 8 00:34:20.761474 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 8 00:34:20.761480 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 8 00:34:20.761486 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 8 00:34:20.761491 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 8 00:34:20.761497 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 8 00:34:20.761502 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 8 00:34:20.761508 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 8 00:34:20.761513 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 8 00:34:20.761518 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 8 00:34:20.761524 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 8 00:34:20.761530 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 8 00:34:20.761536 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 8 00:34:20.761541 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 8 00:34:20.761549 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 8 00:34:20.761555 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 8 00:34:20.761561 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 8 00:34:20.761566 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 8 
00:34:20.761572 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 8 00:34:20.761577 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 8 00:34:20.761583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 8 00:34:20.761590 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:34:20.761595 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 8 00:34:20.761604 kernel: TSC deadline timer available May 8 00:34:20.761611 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 8 00:34:20.761617 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 8 00:34:20.761622 kernel: Booting paravirtualized kernel on VMware hypervisor May 8 00:34:20.761628 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:34:20.761634 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 8 00:34:20.761640 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 May 8 00:34:20.761647 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 May 8 00:34:20.761653 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 8 00:34:20.761658 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 8 00:34:20.761667 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 8 00:34:20.761675 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 8 00:34:20.761681 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 8 00:34:20.761694 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 8 00:34:20.761700 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 8 00:34:20.761706 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 8 00:34:20.761713 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 8 00:34:20.761718 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 8 00:34:20.761724 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 8 
00:34:20.761730 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 8 00:34:20.761736 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 8 00:34:20.761741 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 8 00:34:20.761747 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 8 00:34:20.761753 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 8 00:34:20.761760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:34:20.761766 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:34:20.761772 kernel: random: crng init done May 8 00:34:20.761778 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 8 00:34:20.761784 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 8 00:34:20.761790 kernel: printk: log_buf_len min size: 262144 bytes May 8 00:34:20.761795 kernel: printk: log_buf_len: 1048576 bytes May 8 00:34:20.761801 kernel: printk: early log buf free: 239648(91%) May 8 00:34:20.761807 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:34:20.761814 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:34:20.761820 kernel: Fallback order for Node 0: 0 May 8 00:34:20.761826 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515808 May 8 00:34:20.761832 kernel: Policy zone: DMA32 May 8 00:34:20.761838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:34:20.761844 kernel: Memory: 1936368K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 160000K reserved, 0K cma-reserved) May 8 00:34:20.761852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 8 00:34:20.761858 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:34:20.761864 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:34:20.761869 kernel: Dynamic Preempt: voluntary May 8 00:34:20.761876 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:34:20.761882 kernel: rcu: RCU event tracing is enabled. May 8 00:34:20.761888 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 8 00:34:20.761894 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:34:20.761901 kernel: Rude variant of Tasks RCU enabled. May 8 00:34:20.761907 kernel: Tracing variant of Tasks RCU enabled. May 8 00:34:20.761913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:34:20.761919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 8 00:34:20.761924 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 8 00:34:20.761930 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
May 8 00:34:20.761936 kernel: Console: colour VGA+ 80x25
May 8 00:34:20.761942 kernel: printk: console [tty0] enabled
May 8 00:34:20.761948 kernel: printk: console [ttyS0] enabled
May 8 00:34:20.761954 kernel: ACPI: Core revision 20230628
May 8 00:34:20.761961 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 8 00:34:20.761967 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:34:20.761973 kernel: x2apic enabled
May 8 00:34:20.761979 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:34:20.761985 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:34:20.761991 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 8 00:34:20.761997 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 8 00:34:20.762003 kernel: Disabled fast string operations
May 8 00:34:20.762009 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 8 00:34:20.762016 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 8 00:34:20.762022 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:34:20.762028 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 8 00:34:20.762033 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 8 00:34:20.762225 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 8 00:34:20.762235 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:34:20.762242 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 8 00:34:20.762248 kernel: RETBleed: Mitigation: Enhanced IBRS
May 8 00:34:20.762254 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:34:20.762261 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:34:20.762267 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 8 00:34:20.762273 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 8 00:34:20.762279 kernel: GDS: Unknown: Dependent on hypervisor status
May 8 00:34:20.762285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:34:20.762291 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:34:20.762297 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:34:20.762303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:34:20.762310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:34:20.762316 kernel: Freeing SMP alternatives memory: 32K
May 8 00:34:20.762324 kernel: pid_max: default: 131072 minimum: 1024
May 8 00:34:20.762331 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:34:20.762337 kernel: landlock: Up and running.
May 8 00:34:20.762344 kernel: SELinux: Initializing.
May 8 00:34:20.762353 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:34:20.762360 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 8 00:34:20.762369 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 8 00:34:20.762377 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:34:20.762383 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:34:20.762389 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
May 8 00:34:20.762395 kernel: Performance Events: Skylake events, core PMU driver.
May 8 00:34:20.762401 kernel: core: CPUID marked event: 'cpu cycles' unavailable
May 8 00:34:20.762407 kernel: core: CPUID marked event: 'instructions' unavailable
May 8 00:34:20.762413 kernel: core: CPUID marked event: 'bus cycles' unavailable
May 8 00:34:20.762419 kernel: core: CPUID marked event: 'cache references' unavailable
May 8 00:34:20.762424 kernel: core: CPUID marked event: 'cache misses' unavailable
May 8 00:34:20.762431 kernel: core: CPUID marked event: 'branch instructions' unavailable
May 8 00:34:20.762437 kernel: core: CPUID marked event: 'branch misses' unavailable
May 8 00:34:20.762443 kernel: ... version: 1
May 8 00:34:20.762448 kernel: ... bit width: 48
May 8 00:34:20.762454 kernel: ... generic registers: 4
May 8 00:34:20.762460 kernel: ... value mask: 0000ffffffffffff
May 8 00:34:20.762466 kernel: ... max period: 000000007fffffff
May 8 00:34:20.762472 kernel: ... fixed-purpose events: 0
May 8 00:34:20.762478 kernel: ... event mask: 000000000000000f
May 8 00:34:20.762485 kernel: signal: max sigframe size: 1776
May 8 00:34:20.762490 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:34:20.762497 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:34:20.762504 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 8 00:34:20.762511 kernel: smp: Bringing up secondary CPUs ...
May 8 00:34:20.762517 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:34:20.762523 kernel: .... node #0, CPUs: #1
May 8 00:34:20.762529 kernel: Disabled fast string operations
May 8 00:34:20.762536 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
May 8 00:34:20.762547 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 8 00:34:20.762557 kernel: smp: Brought up 1 node, 2 CPUs
May 8 00:34:20.762567 kernel: smpboot: Max logical packages: 128
May 8 00:34:20.762576 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
May 8 00:34:20.762582 kernel: devtmpfs: initialized
May 8 00:34:20.762588 kernel: x86/mm: Memory block size: 128MB
May 8 00:34:20.762594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
May 8 00:34:20.762600 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:34:20.762606 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
May 8 00:34:20.762612 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:34:20.762619 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:34:20.762625 kernel: audit: initializing netlink subsys (disabled)
May 8 00:34:20.762631 kernel: audit: type=2000 audit(1746664459.069:1): state=initialized audit_enabled=0 res=1
May 8 00:34:20.762637 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:34:20.762642 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:34:20.762648 kernel: cpuidle: using governor menu
May 8 00:34:20.762654 kernel: Simple Boot Flag at 0x36 set to 0x80
May 8 00:34:20.762660 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:34:20.762666 kernel: dca service started, version 1.12.1
May 8 00:34:20.762673 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
May 8 00:34:20.762679 kernel: PCI: Using configuration type 1 for base access
May 8 00:34:20.762685 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:34:20.762691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:34:20.762697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:34:20.762703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:34:20.762708 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:34:20.762714 kernel: ACPI: Added _OSI(Module Device)
May 8 00:34:20.762720 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:34:20.762727 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:34:20.762733 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:34:20.762739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:34:20.762745 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
May 8 00:34:20.762751 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:34:20.762757 kernel: ACPI: Interpreter enabled
May 8 00:34:20.762762 kernel: ACPI: PM: (supports S0 S1 S5)
May 8 00:34:20.762768 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:34:20.762774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:34:20.762781 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:34:20.762787 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
May 8 00:34:20.762793 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
May 8 00:34:20.762873 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:34:20.762929 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
May 8 00:34:20.762982 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
May 8 00:34:20.762991 kernel: PCI host bridge to bus 0000:00
May 8 00:34:20.763075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:34:20.763126 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
May 8 00:34:20.763174 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:34:20.763238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:34:20.763285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
May 8 00:34:20.763329 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
May 8 00:34:20.763388 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
May 8 00:34:20.763450 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
May 8 00:34:20.763518 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
May 8 00:34:20.763576 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
May 8 00:34:20.763627 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
May 8 00:34:20.763687 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 00:34:20.763750 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 00:34:20.763804 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 00:34:20.763853 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 00:34:20.763906 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
May 8 00:34:20.763960 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
May 8 00:34:20.764010 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
May 8 00:34:20.764093 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
May 8 00:34:20.764154 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
May 8 00:34:20.764204 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
May 8 00:34:20.764258 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
May 8 00:34:20.764311 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
May 8 00:34:20.764378 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
May 8 00:34:20.764430 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
May 8 00:34:20.764480 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
May 8 00:34:20.764532 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:34:20.764585 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
May 8 00:34:20.764639 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.764696 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.764755 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.764806 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
May 8 00:34:20.764862 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.764912 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
May 8 00:34:20.764972 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.765032 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
May 8 00:34:20.765358 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767072 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767144 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767203 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767269 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767323 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767377 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767429 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767483 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767543 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767606 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767658 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767711 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767761 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767817 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767869 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
May 8 00:34:20.767930 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.767982 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768055 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768119 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768178 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768227 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768282 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768339 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768401 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768452 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768508 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768559 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768616 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768666 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768735 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768789 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768842 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768896 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
May 8 00:34:20.768949 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.768999 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771146 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771213 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771273 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771329 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771384 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771436 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771489 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771540 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771602 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771657 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771711 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771765 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771821 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771895 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771953 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772007 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772074 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772146 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772201 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772266 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772320 kernel: pci_bus 0000:01: extended config space not accessible
May 8 00:34:20.772372 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:34:20.772433 kernel: pci_bus 0000:02: extended config space not accessible
May 8 00:34:20.772442 kernel: acpiphp: Slot [32] registered
May 8 00:34:20.772449 kernel: acpiphp: Slot [33] registered
May 8 00:34:20.772454 kernel: acpiphp: Slot [34] registered
May 8 00:34:20.772460 kernel: acpiphp: Slot [35] registered
May 8 00:34:20.772466 kernel: acpiphp: Slot [36] registered
May 8 00:34:20.772472 kernel: acpiphp: Slot [37] registered
May 8 00:34:20.772478 kernel: acpiphp: Slot [38] registered
May 8 00:34:20.772486 kernel: acpiphp: Slot [39] registered
May 8 00:34:20.772492 kernel: acpiphp: Slot [40] registered
May 8 00:34:20.772498 kernel: acpiphp: Slot [41] registered
May 8 00:34:20.772503 kernel: acpiphp: Slot [42] registered
May 8 00:34:20.772509 kernel: acpiphp: Slot [43] registered
May 8 00:34:20.772515 kernel: acpiphp: Slot [44] registered
May 8 00:34:20.772521 kernel: acpiphp: Slot [45] registered
May 8 00:34:20.772527 kernel: acpiphp: Slot [46] registered
May 8 00:34:20.772533 kernel: acpiphp: Slot [47] registered
May 8 00:34:20.772539 kernel: acpiphp: Slot [48] registered
May 8 00:34:20.772546 kernel: acpiphp: Slot [49] registered
May 8 00:34:20.772555 kernel: acpiphp: Slot [50] registered
May 8 00:34:20.772561 kernel: acpiphp: Slot [51] registered
May 8 00:34:20.772566 kernel: acpiphp: Slot [52] registered
May 8 00:34:20.772572 kernel: acpiphp: Slot [53] registered
May 8 00:34:20.772578 kernel: acpiphp: Slot [54] registered
May 8 00:34:20.772584 kernel: acpiphp: Slot [55] registered
May 8 00:34:20.772590 kernel: acpiphp: Slot [56] registered
May 8 00:34:20.772595 kernel: acpiphp: Slot [57] registered
May 8 00:34:20.772602 kernel: acpiphp: Slot [58] registered
May 8 00:34:20.772608 kernel: acpiphp: Slot [59] registered
May 8 00:34:20.772614 kernel: acpiphp: Slot [60] registered
May 8 00:34:20.772621 kernel: acpiphp: Slot [61] registered
May 8 00:34:20.772626 kernel: acpiphp: Slot [62] registered
May 8 00:34:20.772632 kernel: acpiphp: Slot [63] registered
May 8 00:34:20.772687 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
May 8 00:34:20.772748 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 8 00:34:20.772808 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 8 00:34:20.772864 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:34:20.772914 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
May 8 00:34:20.772964 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
May 8 00:34:20.773017 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
May 8 00:34:20.774618 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
May 8 00:34:20.774679 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
May 8 00:34:20.774747 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
May 8 00:34:20.774806 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
May 8 00:34:20.774857 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
May 8 00:34:20.774909 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:34:20.775012 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.775084 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:34:20.775143 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 8 00:34:20.775194 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 8 00:34:20.775253 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 8 00:34:20.775305 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 8 00:34:20.775356 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 8 00:34:20.775405 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 8 00:34:20.775455 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:34:20.775511 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 8 00:34:20.775562 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 8 00:34:20.775611 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 8 00:34:20.775663 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:34:20.775716 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 8 00:34:20.775767 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 8 00:34:20.775829 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:34:20.775927 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 8 00:34:20.776009 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 8 00:34:20.776177 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:34:20.776237 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 8 00:34:20.776286 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 8 00:34:20.776335 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:34:20.776396 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 8 00:34:20.776447 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 8 00:34:20.777089 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:34:20.777163 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 8 00:34:20.777215 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 8 00:34:20.777266 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:34:20.777344 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
May 8 00:34:20.777423 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
May 8 00:34:20.777475 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
May 8 00:34:20.777526 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
May 8 00:34:20.777580 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
May 8 00:34:20.777630 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:34:20.777690 kernel: pci 0000:0b:00.0: supports D1 D2
May 8 00:34:20.777741 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:34:20.777802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
May 8 00:34:20.777856 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 8 00:34:20.777918 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 8 00:34:20.779116 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 8 00:34:20.779174 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 8 00:34:20.779227 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 8 00:34:20.779277 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 8 00:34:20.779328 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:34:20.779396 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 8 00:34:20.779448 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 8 00:34:20.779502 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 8 00:34:20.779565 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:34:20.779633 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 8 00:34:20.779693 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 8 00:34:20.779753 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:34:20.779815 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 8 00:34:20.779875 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 8 00:34:20.779936 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:34:20.780002 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 8 00:34:20.782103 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 8 00:34:20.782164 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:34:20.782219 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 8 00:34:20.782270 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 8 00:34:20.782320 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 8 00:34:20.782371 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 8 00:34:20.782421 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 8 00:34:20.782470 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:34:20.782526 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 8 00:34:20.782576 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 8 00:34:20.782625 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 8 00:34:20.782674 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:34:20.782726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 8 00:34:20.782774 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 8 00:34:20.782824 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 8 00:34:20.782880 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:34:20.782935 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 8 00:34:20.782986 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 8 00:34:20.783036 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 8 00:34:20.785766 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:34:20.785825 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 8 00:34:20.785878 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 8 00:34:20.785929 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 8 00:34:20.785984 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 8 00:34:20.786034 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 8 00:34:20.786100 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 8 00:34:20.786152 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 8 00:34:20.786202 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 8 00:34:20.786252 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 8 00:34:20.786303 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 8 00:34:20.786352 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 8 00:34:20.786405 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 8 00:34:20.786456 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 8 00:34:20.786505 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 8 00:34:20.786555 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 8 00:34:20.786605 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 8 00:34:20.786655 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 8 00:34:20.786703 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 8 00:34:20.786752 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 8 00:34:20.786806 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 8 00:34:20.786856 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 8 00:34:20.786905 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 8 00:34:20.786954 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 8 00:34:20.787005 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 8 00:34:20.787071 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 8 00:34:20.787123 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 8 00:34:20.787175 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 8 00:34:20.787240 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 8 00:34:20.787289 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 8 00:34:20.787353 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 8 00:34:20.787404 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 8 00:34:20.787453 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 8 00:34:20.787506 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 8 00:34:20.787555 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 8 00:34:20.787605 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 8 00:34:20.787660 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 8 00:34:20.787711 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 8 00:34:20.787760 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 8 00:34:20.787811 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 8 00:34:20.787860 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 8 00:34:20.787909 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 8 00:34:20.787918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
May 8 00:34:20.787924 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
May 8 00:34:20.787932 kernel: ACPI: PCI: Interrupt link LNKB disabled
May 8 00:34:20.787938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:34:20.787944 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
May 8 00:34:20.787950 kernel: iommu: Default domain type: Translated
May 8 00:34:20.787956 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:34:20.787962 kernel: PCI: Using ACPI for IRQ routing
May 8 00:34:20.787968 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:34:20.787974 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
May 8 00:34:20.787980 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
May 8 00:34:20.788032 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
May 8 00:34:20.790149 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
May 8 00:34:20.790201 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:34:20.790210 kernel: vgaarb: loaded
May 8 00:34:20.790217 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
May 8 00:34:20.790223 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
May 8 00:34:20.790229 kernel: clocksource: Switched to clocksource tsc-early
May 8 00:34:20.790235 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:34:20.790241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:34:20.790250 kernel: pnp: PnP ACPI init
May 8 00:34:20.790303 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
May 8 00:34:20.790350 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
May 8 00:34:20.790395 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
May 8 00:34:20.790443 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
May 8 00:34:20.790493 kernel: pnp 00:06: [dma 2]
May 8 00:34:20.790542 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
May 8 00:34:20.790590 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
May 8 00:34:20.790634 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
May 8 00:34:20.790642 kernel: pnp: PnP ACPI: found 8 devices
May 8 00:34:20.790649 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:34:20.790656 kernel: NET: Registered PF_INET protocol family
May 8 00:34:20.790662 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:34:20.790668 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 8 00:34:20.790676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:34:20.790682 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 8 00:34:20.790688 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 8 00:34:20.790693 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 8 00:34:20.790699 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:34:20.790705 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:34:20.790711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:34:20.790717 kernel: NET: Registered PF_XDP protocol family
May 8 00:34:20.790769 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:34:20.790824 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 8 00:34:20.790875 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 8 00:34:20.790927 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 8 00:34:20.790979 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 8 00:34:20.791029 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
May 8 00:34:20.791091 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
May 8 00:34:20.791145 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
May 8 00:34:20.791195 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
May 8 00:34:20.791245 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
May 8 00:34:20.791295 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
May 8 00:34:20.791344 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
May 8 00:34:20.791394 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
May 8 00:34:20.791446 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
May 8 00:34:20.791496 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
May 8 00:34:20.791546 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
May 8 00:34:20.791596 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
May 8 00:34:20.791645 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
May 8 00:34:20.791705 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
May 8 00:34:20.791756 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
May 8 00:34:20.791806 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
May 8 00:34:20.791855 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
May 8 00:34:20.791910 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
May 8 00:34:20.791963 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
May 8 00:34:20.792016 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
May 8 00:34:20.793967 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794029 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794102 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794269 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794337 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794400 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794452 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794505 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794556 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794605 kernel: pci 0000:00:15.7: BAR 13: failed to assign
[io size 0x1000] May 8 00:34:20.794656 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.794705 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.794756 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.794807 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.794857 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.794910 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.794961 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795010 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795100 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795150 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795199 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795248 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795297 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795346 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795399 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795449 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795498 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795548 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795598 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795647 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795697 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795747 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795799 kernel: pci 
0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795849 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.795899 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.795957 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796008 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796068 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796120 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796170 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796223 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796273 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796328 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796387 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796438 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796488 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796545 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796595 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796652 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796716 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796774 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796824 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796874 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 8 00:34:20.796933 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.796985 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 8 
00:34:20.798083 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798167 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798234 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798293 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798343 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798393 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798447 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798498 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798546 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798595 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798645 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798702 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798769 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798824 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798882 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.798939 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.798990 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799095 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799152 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799208 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799258 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799322 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799387 kernel: pci 0000:00:15.6: BAR 13: failed to 
assign [io size 0x1000] May 8 00:34:20.799448 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799504 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799554 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799603 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799656 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 8 00:34:20.799709 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 8 00:34:20.799760 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 8 00:34:20.799811 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 8 00:34:20.799864 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 8 00:34:20.799929 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 8 00:34:20.799991 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:34:20.800083 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 8 00:34:20.800142 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 8 00:34:20.800194 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 8 00:34:20.800244 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 8 00:34:20.800293 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:34:20.800348 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 8 00:34:20.800403 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 8 00:34:20.800456 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 8 00:34:20.800526 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:34:20.800586 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 8 00:34:20.800649 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 8 00:34:20.800699 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 8 
00:34:20.800749 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:34:20.800798 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 8 00:34:20.800847 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 8 00:34:20.800905 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:34:20.800955 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 8 00:34:20.801004 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 8 00:34:20.801445 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:34:20.801502 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 8 00:34:20.801553 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 8 00:34:20.801618 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:34:20.801685 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 8 00:34:20.801738 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 8 00:34:20.801800 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:34:20.801852 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 8 00:34:20.801906 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 8 00:34:20.801957 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:34:20.802011 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 8 00:34:20.802217 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 8 00:34:20.802281 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 8 00:34:20.802342 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 8 00:34:20.802399 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:34:20.802451 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 8 00:34:20.802502 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] 
May 8 00:34:20.802551 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 8 00:34:20.802600 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:34:20.802657 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 8 00:34:20.802716 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 8 00:34:20.802778 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 8 00:34:20.802834 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:34:20.803162 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 8 00:34:20.803221 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 8 00:34:20.803282 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:34:20.803342 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 8 00:34:20.803392 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 8 00:34:20.803452 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:34:20.803503 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 8 00:34:20.803552 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 8 00:34:20.803605 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:34:20.803662 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 8 00:34:20.803712 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 8 00:34:20.803770 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:34:20.803825 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 8 00:34:20.803889 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 8 00:34:20.803949 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:34:20.804000 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 8 00:34:20.804067 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 
8 00:34:20.804126 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 8 00:34:20.804177 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:34:20.804228 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 8 00:34:20.804277 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 8 00:34:20.804333 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 8 00:34:20.804383 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:34:20.804439 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 8 00:34:20.804505 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 8 00:34:20.804560 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 8 00:34:20.804616 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:34:20.804669 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 8 00:34:20.804719 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 8 00:34:20.804768 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:34:20.804818 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 8 00:34:20.804875 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 8 00:34:20.804938 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:34:20.805001 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 8 00:34:20.805070 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 8 00:34:20.805125 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:34:20.805182 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 8 00:34:20.805232 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 8 00:34:20.805281 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:34:20.805331 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 8 
00:34:20.805381 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 8 00:34:20.805433 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:34:20.805486 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 8 00:34:20.805552 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 8 00:34:20.805603 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 8 00:34:20.805664 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:34:20.805718 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 8 00:34:20.805767 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 8 00:34:20.805816 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 8 00:34:20.805865 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:34:20.805918 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 8 00:34:20.805978 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 8 00:34:20.806044 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:34:20.806115 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 8 00:34:20.806180 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 8 00:34:20.806242 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:34:20.806294 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 8 00:34:20.806343 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 8 00:34:20.806392 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:34:20.806446 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 8 00:34:20.806505 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 8 00:34:20.806574 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:34:20.806636 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 8 
00:34:20.806694 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 8 00:34:20.806764 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:34:20.806820 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 8 00:34:20.806870 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 8 00:34:20.806919 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:34:20.806969 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:34:20.807015 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:34:20.807117 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:34:20.807170 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 8 00:34:20.807222 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 8 00:34:20.807284 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 8 00:34:20.807343 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 8 00:34:20.807390 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 8 00:34:20.807435 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 8 00:34:20.807480 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 8 00:34:20.807525 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 8 00:34:20.807570 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 8 00:34:20.807626 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 8 00:34:20.807684 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 8 00:34:20.807737 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 8 00:34:20.807792 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 8 00:34:20.807854 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 8 00:34:20.807902 kernel: pci_bus 0000:04: resource 1 
[mem 0xfd100000-0xfd1fffff] May 8 00:34:20.807949 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 8 00:34:20.808006 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 8 00:34:20.808065 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 8 00:34:20.808113 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 8 00:34:20.808172 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 8 00:34:20.808237 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 8 00:34:20.808294 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 8 00:34:20.808351 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 8 00:34:20.808421 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 8 00:34:20.808471 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 8 00:34:20.808522 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 8 00:34:20.808568 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 8 00:34:20.808628 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 8 00:34:20.808688 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 8 00:34:20.808754 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 8 00:34:20.808811 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 8 00:34:20.808869 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 8 00:34:20.808926 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 8 00:34:20.808983 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 8 00:34:20.809030 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 8 00:34:20.809168 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 8 00:34:20.809224 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 8 
00:34:20.809282 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 8 00:34:20.809346 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 8 00:34:20.809403 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 8 00:34:20.809468 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 8 00:34:20.809528 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 8 00:34:20.809583 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 8 00:34:20.809630 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 8 00:34:20.809680 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 8 00:34:20.809730 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 8 00:34:20.809791 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 8 00:34:20.809849 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 8 00:34:20.809922 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 8 00:34:20.809974 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 8 00:34:20.810020 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 8 00:34:20.810146 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 8 00:34:20.810194 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 8 00:34:20.810240 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 8 00:34:20.810303 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 8 00:34:20.810357 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 8 00:34:20.810416 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 8 00:34:20.810481 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 8 00:34:20.810528 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 8 00:34:20.810582 kernel: 
pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 8 00:34:20.810636 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 8 00:34:20.810689 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 8 00:34:20.810735 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 8 00:34:20.810784 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 8 00:34:20.810840 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 8 00:34:20.810897 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 8 00:34:20.810952 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 8 00:34:20.811020 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 8 00:34:20.811113 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 8 00:34:20.811172 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 8 00:34:20.811227 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 8 00:34:20.811273 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 8 00:34:20.811319 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 8 00:34:20.811375 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 8 00:34:20.811430 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 8 00:34:20.811493 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 8 00:34:20.811550 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 8 00:34:20.811610 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 8 00:34:20.811668 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 8 00:34:20.811729 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 8 00:34:20.811776 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 8 00:34:20.811826 kernel: pci_bus 
0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 8 00:34:20.811872 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 8 00:34:20.811935 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 8 00:34:20.811988 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 8 00:34:20.812077 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 00:34:20.812088 kernel: PCI: CLS 32 bytes, default 64 May 8 00:34:20.812095 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 8 00:34:20.812102 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:34:20.812111 kernel: clocksource: Switched to clocksource tsc May 8 00:34:20.812118 kernel: Initialise system trusted keyrings May 8 00:34:20.812124 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 8 00:34:20.812132 kernel: Key type asymmetric registered May 8 00:34:20.812142 kernel: Asymmetric key parser 'x509' registered May 8 00:34:20.812152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:34:20.812159 kernel: io scheduler mq-deadline registered May 8 00:34:20.812165 kernel: io scheduler kyber registered May 8 00:34:20.812172 kernel: io scheduler bfq registered May 8 00:34:20.812240 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 8 00:34:20.812298 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812351 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 8 00:34:20.812404 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812467 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 8 00:34:20.812530 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812594 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 8 00:34:20.812657 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812717 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 8 00:34:20.812775 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812834 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 8 00:34:20.812885 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.812940 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 8 00:34:20.813002 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.813112 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 8 00:34:20.813171 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.813237 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 8 00:34:20.813297 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.813349 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 8 00:34:20.813400 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 8 00:34:20.813457 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 8 00:34:20.813517 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813581 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
May 8 00:34:20.813647 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813711 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
May 8 00:34:20.813777 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813835 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
May 8 00:34:20.813886 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813940 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
May 8 00:34:20.813998 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814121 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
May 8 00:34:20.814187 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814239 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
May 8 00:34:20.814296 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814364 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
May 8 00:34:20.814416 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814466 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
May 8 00:34:20.814515 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814577 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
May 8 00:34:20.814638 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814701 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
May 8 00:34:20.814762 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814818 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
May 8 00:34:20.814873 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814952 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
May 8 00:34:20.815006 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815097 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
May 8 00:34:20.815154 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815215 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
May 8 00:34:20.815283 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815334 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
May 8 00:34:20.815396 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815450 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
May 8 00:34:20.815500 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815550 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
May 8 00:34:20.815604 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815663 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
May 8 00:34:20.815729 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815791 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
May 8 00:34:20.815844 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815900 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
May 8 00:34:20.815959 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.816013 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
May 8 00:34:20.816093 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.816104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:34:20.816111 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:34:20.816117 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:34:20.816123 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
May 8 00:34:20.816130 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:34:20.816139 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:34:20.816203 kernel: rtc_cmos 00:01: registered as rtc0
May 8 00:34:20.816262 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:34:20 UTC (1746664460)
May 8 00:34:20.816314 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
May 8 00:34:20.816328 kernel: intel_pstate: CPU model not supported
May 8 00:34:20.816335 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:34:20.816341 kernel: NET: Registered PF_INET6 protocol family
May 8 00:34:20.816348 kernel: Segment Routing with IPv6
May 8 00:34:20.816354 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:34:20.816363 kernel: NET: Registered PF_PACKET protocol family
May 8 00:34:20.816370 kernel: Key type dns_resolver registered
May 8 00:34:20.816376 kernel: IPI shorthand broadcast: enabled
May 8 00:34:20.816382 kernel: sched_clock: Marking stable (942371927, 242356166)->(1256642861, -71914768)
May 8 00:34:20.816389 kernel: registered taskstats version 1
May 8 00:34:20.816395 kernel: Loading compiled-in X.509 certificates
May 8 00:34:20.816401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 00:34:20.816408 kernel: Key type .fscrypt registered
May 8 00:34:20.816417 kernel: Key type fscrypt-provisioning registered
May 8 00:34:20.816425 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:34:20.816431 kernel: ima: Allocated hash algorithm: sha1
May 8 00:34:20.816439 kernel: ima: No architecture policies found
May 8 00:34:20.816450 kernel: clk: Disabling unused clocks
May 8 00:34:20.816458 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 00:34:20.816464 kernel: Write protecting the kernel read-only data: 36864k
May 8 00:34:20.816470 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 00:34:20.816476 kernel: Run /init as init process
May 8 00:34:20.816487 kernel: with arguments:
May 8 00:34:20.816495 kernel: /init
May 8 00:34:20.816502 kernel: with environment:
May 8 00:34:20.816508 kernel: HOME=/
May 8 00:34:20.816514 kernel: TERM=linux
May 8 00:34:20.816520 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:34:20.816527 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:34:20.816535 systemd[1]: Detected virtualization vmware.
May 8 00:34:20.816542 systemd[1]: Detected architecture x86-64.
May 8 00:34:20.816550 systemd[1]: Running in initrd.
May 8 00:34:20.816557 systemd[1]: No hostname configured, using default hostname.
May 8 00:34:20.816563 systemd[1]: Hostname set to .
May 8 00:34:20.816569 systemd[1]: Initializing machine ID from random generator.
May 8 00:34:20.816576 systemd[1]: Queued start job for default target initrd.target.
May 8 00:34:20.816582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:34:20.816590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:34:20.816597 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:34:20.816605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:34:20.816611 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:34:20.816618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:34:20.816626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:34:20.816632 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:34:20.816639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:34:20.816647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:34:20.816653 systemd[1]: Reached target paths.target - Path Units.
May 8 00:34:20.816660 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:34:20.816670 systemd[1]: Reached target swap.target - Swaps.
May 8 00:34:20.816677 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:34:20.816683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:34:20.816690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:34:20.816701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:34:20.816711 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:34:20.816719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:34:20.816725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:34:20.816732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:34:20.816739 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:34:20.816747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:34:20.816754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:34:20.816765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:34:20.816775 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:34:20.816782 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:34:20.816790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:34:20.816797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:20.816816 systemd-journald[214]: Collecting audit messages is disabled.
May 8 00:34:20.816836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:34:20.816845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:34:20.816856 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:34:20.816864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:34:20.816870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:34:20.816879 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:34:20.816885 kernel: Bridge firewalling registered
May 8 00:34:20.816892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:34:20.816898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:34:20.816905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:20.816911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:34:20.816918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:20.816925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:34:20.816934 systemd-journald[214]: Journal started
May 8 00:34:20.816949 systemd-journald[214]: Runtime Journal (/run/log/journal/0d0bcc9349fd49a6858ad893107d902e) is 4.8M, max 38.6M, 33.8M free.
May 8 00:34:20.776515 systemd-modules-load[215]: Inserted module 'overlay'
May 8 00:34:20.797778 systemd-modules-load[215]: Inserted module 'br_netfilter'
May 8 00:34:20.819312 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:34:20.824132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:34:20.827065 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:20.829144 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:34:20.832026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:34:20.835298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:34:20.837133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:34:20.841165 dracut-cmdline[244]: dracut-dracut-053
May 8 00:34:20.842377 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:34:20.859274 systemd-resolved[251]: Positive Trust Anchors:
May 8 00:34:20.859283 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:34:20.859312 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:34:20.861150 systemd-resolved[251]: Defaulting to hostname 'linux'.
May 8 00:34:20.861890 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:34:20.862075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:34:20.894069 kernel: SCSI subsystem initialized
May 8 00:34:20.900051 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:34:20.907051 kernel: iscsi: registered transport (tcp)
May 8 00:34:20.920049 kernel: iscsi: registered transport (qla4xxx)
May 8 00:34:20.920067 kernel: QLogic iSCSI HBA Driver
May 8 00:34:20.939660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:34:20.944142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:34:20.959321 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:34:20.960404 kernel: device-mapper: uevent: version 1.0.3
May 8 00:34:20.960414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:34:20.993071 kernel: raid6: avx2x4 gen() 44245 MB/s
May 8 00:34:21.010065 kernel: raid6: avx2x2 gen() 46366 MB/s
May 8 00:34:21.027333 kernel: raid6: avx2x1 gen() 35277 MB/s
May 8 00:34:21.027380 kernel: raid6: using algorithm avx2x2 gen() 46366 MB/s
May 8 00:34:21.045317 kernel: raid6: .... xor() 28772 MB/s, rmw enabled
May 8 00:34:21.045366 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:34:21.059064 kernel: xor: automatically using best checksumming function avx
May 8 00:34:21.158063 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:34:21.164454 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:34:21.169140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:34:21.176215 systemd-udevd[433]: Using default interface naming scheme 'v255'.
May 8 00:34:21.178686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:34:21.184143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:34:21.190913 dracut-pre-trigger[435]: rd.md=0: removing MD RAID activation
May 8 00:34:21.205879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:34:21.210122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:34:21.288793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:34:21.291169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:34:21.305787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:34:21.306538 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:34:21.306805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:34:21.307004 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:34:21.312179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:34:21.320480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:34:21.361066 kernel: VMware PVSCSI driver - version 1.0.7.0-k
May 8 00:34:21.367544 kernel: vmw_pvscsi: using 64bit dma
May 8 00:34:21.367579 kernel: vmw_pvscsi: max_id: 16
May 8 00:34:21.367592 kernel: vmw_pvscsi: setting ring_pages to 8
May 8 00:34:21.370251 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
May 8 00:34:21.370274 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
May 8 00:34:21.381317 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
May 8 00:34:21.385055 kernel: vmw_pvscsi: enabling reqCallThreshold
May 8 00:34:21.385084 kernel: vmw_pvscsi: driver-based request coalescing enabled
May 8 00:34:21.385097 kernel: vmw_pvscsi: using MSI-X
May 8 00:34:21.385114 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:34:21.385126 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
May 8 00:34:21.393186 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
May 8 00:34:21.393355 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
May 8 00:34:21.398399 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
May 8 00:34:21.398433 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:34:21.399174 kernel: AES CTR mode by8 optimization enabled
May 8 00:34:21.402142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:34:21.402216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:21.402523 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:21.402831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:34:21.402947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:21.403076 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:21.407090 kernel: libata version 3.00 loaded.
May 8 00:34:21.408077 kernel: ata_piix 0000:00:07.1: version 2.13
May 8 00:34:21.419533 kernel: scsi host1: ata_piix
May 8 00:34:21.419618 kernel: scsi host2: ata_piix
May 8 00:34:21.419685 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
May 8 00:34:21.419695 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
May 8 00:34:21.408289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:21.425687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:21.430143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:21.441446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:21.589128 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
May 8 00:34:21.597284 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
May 8 00:34:21.609064 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
May 8 00:34:21.647654 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 8 00:34:21.647744 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
May 8 00:34:21.647821 kernel: sd 0:0:0:0: [sda] Cache data unavailable
May 8 00:34:21.647886 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
May 8 00:34:21.647959 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
May 8 00:34:21.648027 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:34:21.648062 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:34:21.648131 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:21.648139 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 8 00:34:21.677828 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487)
May 8 00:34:21.681941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
May 8 00:34:21.685149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
May 8 00:34:21.687904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
May 8 00:34:21.693055 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (485)
May 8 00:34:21.698940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
May 8 00:34:21.699249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
May 8 00:34:21.704158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:34:21.771078 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:21.778101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:22.779091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:22.779722 disk-uuid[593]: The operation has completed successfully.
May 8 00:34:22.836374 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:34:22.836453 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:34:22.840151 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:34:22.847211 sh[609]: Success
May 8 00:34:22.872076 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 8 00:34:22.941484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:34:22.943115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:34:22.943483 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:34:23.029624 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:34:23.029670 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.029684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:34:23.031185 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:34:23.032335 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:34:23.041059 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 00:34:23.042960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:34:23.052173 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
May 8 00:34:23.053629 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:34:23.069868 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.069906 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.069921 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:23.077061 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:23.084969 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:34:23.086114 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.090909 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:34:23.095963 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:34:23.205712 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
May 8 00:34:23.216174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:34:23.251539 ignition[669]: Ignition 2.19.0
May 8 00:34:23.251547 ignition[669]: Stage: fetch-offline
May 8 00:34:23.251573 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.251579 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.251639 ignition[669]: parsed url from cmdline: ""
May 8 00:34:23.251641 ignition[669]: no config URL provided
May 8 00:34:23.251643 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:34:23.251648 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 8 00:34:23.253143 ignition[669]: config successfully fetched
May 8 00:34:23.253174 ignition[669]: parsing config with SHA512: 2c48eaad9d45e5dd108e9fc0a923f3141af6f87f06d8cc108b16a5b2a9d4bb72868722f6b00c7cfbdf9338fb56d71b2b2d98a545b4fc6a042bb8886dfa630c2f
May 8 00:34:23.255962 unknown[669]: fetched base config from "system"
May 8 00:34:23.256125 unknown[669]: fetched user config from "vmware"
May 8 00:34:23.256704 ignition[669]: fetch-offline: fetch-offline passed
May 8 00:34:23.256892 ignition[669]: Ignition finished successfully
May 8 00:34:23.257683 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:34:23.277664 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:34:23.283179 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:34:23.294930 systemd-networkd[804]: lo: Link UP
May 8 00:34:23.294937 systemd-networkd[804]: lo: Gained carrier
May 8 00:34:23.295813 systemd-networkd[804]: Enumeration completed
May 8 00:34:23.295863 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:34:23.296069 systemd[1]: Reached target network.target - Network.
May 8 00:34:23.296195 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:34:23.296607 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:34:23.296995 systemd-networkd[804]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
May 8 00:34:23.300231 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
May 8 00:34:23.300348 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
May 8 00:34:23.301571 systemd-networkd[804]: ens192: Link UP
May 8 00:34:23.301575 systemd-networkd[804]: ens192: Gained carrier
May 8 00:34:23.309754 ignition[806]: Ignition 2.19.0
May 8 00:34:23.310059 ignition[806]: Stage: kargs
May 8 00:34:23.310280 ignition[806]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.310418 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.311150 ignition[806]: kargs: kargs passed
May 8 00:34:23.311297 ignition[806]: Ignition finished successfully
May 8 00:34:23.312540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:34:23.317144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:34:23.325431 ignition[814]: Ignition 2.19.0
May 8 00:34:23.325438 ignition[814]: Stage: disks
May 8 00:34:23.325551 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.325558 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.326255 ignition[814]: disks: disks passed
May 8 00:34:23.326829 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:34:23.326286 ignition[814]: Ignition finished successfully
May 8 00:34:23.327281 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:34:23.327440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:34:23.327649 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:34:23.327861 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:34:23.328054 systemd[1]: Reached target basic.target - Basic System.
May 8 00:34:23.332140 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:34:23.343362 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 8 00:34:23.345141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:34:23.348126 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:34:23.428925 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:34:23.429190 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:34:23.429422 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:34:23.440115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:34:23.441639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:34:23.442021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:34:23.442064 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:34:23.442086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:34:23.446418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:34:23.447267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:34:23.456050 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (830)
May 8 00:34:23.465897 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.465933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.465948 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:23.503063 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:23.508421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:34:23.550425 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:34:23.553244 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory
May 8 00:34:23.555734 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:34:23.558302 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:34:23.689546 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:34:23.695151 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:34:23.697660 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:34:23.703054 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.723426 ignition[943]: INFO : Ignition 2.19.0
May 8 00:34:23.723426 ignition[943]: INFO : Stage: mount
May 8 00:34:23.723790 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.723790 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.724218 ignition[943]: INFO : mount: mount passed
May 8 00:34:23.724340 ignition[943]: INFO : Ignition finished successfully
May 8 00:34:23.724717 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:34:23.725674 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:34:23.796423 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:34:24.026908 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:34:24.033222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:34:24.044134 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (955)
May 8 00:34:24.044183 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:24.047074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:24.047108 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:24.052056 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:24.053674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:34:24.068876 ignition[972]: INFO : Ignition 2.19.0
May 8 00:34:24.068876 ignition[972]: INFO : Stage: files
May 8 00:34:24.069548 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:24.069548 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:24.069857 ignition[972]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:34:24.070075 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:34:24.070075 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:34:24.071806 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:34:24.072093 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:34:24.072536 unknown[972]: wrote ssh authorized keys file for user: core
May 8 00:34:24.072894 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:34:24.110624 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:34:24.269767 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:34:24.778300 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:34:24.817316 systemd-networkd[804]: ens192: Gained IPv6LL
May 8 00:34:25.121848 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:25.121848 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 8 00:34:25.122464 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 8 00:34:25.122464 ignition[972]: INFO : files: op(d): [started] processing unit "containerd.service"
May 8 00:34:25.130436 ignition[972]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:34:25.298274 ignition[972]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:34:25.301664 ignition[972]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:34:25.301664 ignition[972]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:34:25.301664 ignition[972]: INFO : files: files passed
May 8 00:34:25.301664 ignition[972]: INFO : Ignition finished successfully
May 8 00:34:25.302144 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:34:25.305172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:34:25.306964 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:34:25.308349 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:34:25.308582 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:34:25.314870 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.314870 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.315828 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.316890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:34:25.317507 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:34:25.321293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:34:25.335884 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:34:25.335983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:34:25.336468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:34:25.336626 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:34:25.336870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:34:25.337519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:34:25.347347 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:34:25.352183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:34:25.357915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:34:25.358280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:34:25.358595 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:34:25.358860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:34:25.359200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:34:25.359578 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:34:25.359867 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:34:25.360139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:34:25.360422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:34:25.360711 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:34:25.360999 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:34:25.361346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:34:25.361631 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:34:25.361924 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:34:25.362205 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:34:25.362454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:34:25.362525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:34:25.363010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:34:25.363288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:34:25.363587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:34:25.363736 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:34:25.364023 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:34:25.364101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:34:25.364549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:34:25.364616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:34:25.364936 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:34:25.365318 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:34:25.369063 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:34:25.369230 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:34:25.369515 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:34:25.369693 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:34:25.369741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:34:25.369900 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:34:25.369945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:34:25.370184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:34:25.370246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:34:25.370406 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:34:25.370464 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:34:25.382154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:34:25.382269 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:34:25.382344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:34:25.383706 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:34:25.383814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:34:25.383881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:34:25.384113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:34:25.384174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:34:25.386558 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:34:25.386820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:34:25.391941 ignition[1028]: INFO : Ignition 2.19.0
May 8 00:34:25.394387 ignition[1028]: INFO : Stage: umount
May 8 00:34:25.394387 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:25.394387 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:25.394387 ignition[1028]: INFO : umount: umount passed
May 8 00:34:25.394387 ignition[1028]: INFO : Ignition finished successfully
May 8 00:34:25.395056 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:34:25.395117 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:34:25.395547 systemd[1]: Stopped target network.target - Network.
May 8 00:34:25.395646 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:34:25.395688 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:34:25.395801 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:34:25.395829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:34:25.395933 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:34:25.395960 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:34:25.396074 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:34:25.396095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:34:25.396313 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:34:25.396467 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:34:25.403127 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:34:25.403357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:34:25.405086 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:34:25.405387 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:34:25.405412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:34:25.406378 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:34:25.406445 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:34:25.407014 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:34:25.407231 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:34:25.411168 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:34:25.411269 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:34:25.411313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:34:25.411469 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
May 8 00:34:25.411500 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
May 8 00:34:25.412619 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:34:25.412649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:34:25.412819 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:34:25.412866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:34:25.413685 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:34:25.421196 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:34:25.421281 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:34:25.426585 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:34:25.426693 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:34:25.427733 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:34:25.427773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:34:25.428206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:34:25.428226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:34:25.428408 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:34:25.428432 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:34:25.428748 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:34:25.428771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:34:25.429119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:34:25.429142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:25.433148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:34:25.433271 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:34:25.433307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:34:25.433466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:34:25.433494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:25.436396 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:34:25.436478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:34:25.511909 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:34:25.511983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:34:25.512591 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:34:25.512778 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:34:25.512818 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:34:25.515156 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:34:25.567769 systemd[1]: Switching root.
May 8 00:34:25.589446 systemd-journald[214]: Journal stopped
May 8 00:34:20.759850 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 8 00:34:20.759855 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 8 00:34:20.759860 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 8 00:34:20.759867 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 8 00:34:20.759873 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 8 00:34:20.759878 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 8 00:34:20.759883 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 8 00:34:20.759888 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:34:20.759893 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 8 00:34:20.759898 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 8 00:34:20.759903 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 8 00:34:20.759909 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 8 00:34:20.759914 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 8 00:34:20.759920 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 8 00:34:20.759925 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 8 00:34:20.759930 kernel: system APIC only can use physical flat
May 8 00:34:20.759935 kernel: APIC: Switched APIC routing to: physical flat
May 8 00:34:20.759940 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 8 00:34:20.759946 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 8 00:34:20.759951 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 8 00:34:20.759956 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 8 00:34:20.759961 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 8 00:34:20.759967 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 8 00:34:20.759972 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 8 00:34:20.759977 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 8 00:34:20.759982 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 8 00:34:20.759987 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 8 00:34:20.759992 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 8 00:34:20.759997 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 8 00:34:20.760002 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 8 00:34:20.760007 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 8 00:34:20.760012 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 8 00:34:20.760018 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 8 00:34:20.760023 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 8 00:34:20.760028 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 8 00:34:20.760033 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 8 00:34:20.760047 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 8 00:34:20.760052 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 8 00:34:20.760057 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 8 00:34:20.760062 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 8 00:34:20.760067 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 8 00:34:20.760072 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 8 00:34:20.760079 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 8 00:34:20.760084 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 8 00:34:20.760089 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 8 00:34:20.760094 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 8 00:34:20.760099 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 8 00:34:20.760104 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 8 00:34:20.760109 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 8 00:34:20.760114 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 8 00:34:20.760119 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 8 00:34:20.760124 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 8 00:34:20.760129 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 8 00:34:20.760135 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 8 00:34:20.760140 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 8 00:34:20.760145 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 8 00:34:20.760150 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 8 00:34:20.760155 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 8 00:34:20.760161 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 8 00:34:20.760166 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 8 00:34:20.760170 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 8 00:34:20.760176 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 8 00:34:20.760180 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 8 00:34:20.760187 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 8 00:34:20.760192 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 8 00:34:20.760197 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 8 00:34:20.760202 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 8 00:34:20.760207 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 8 00:34:20.760212 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 8 00:34:20.760217 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 8 00:34:20.760222 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 8 00:34:20.760227 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 8 00:34:20.760232 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 8 00:34:20.760238 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 8 00:34:20.760244 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 8 00:34:20.760252 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 8 00:34:20.760265 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 8 00:34:20.760276 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 8 00:34:20.760282 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 8 00:34:20.760288 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 8 00:34:20.760293 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 8 00:34:20.760301 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 8 00:34:20.760310 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 8 00:34:20.760318 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 8 00:34:20.760328 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 8 00:34:20.760333 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 8 00:34:20.760339 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 8 00:34:20.760344 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 8 00:34:20.760350 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 8 00:34:20.760355 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 8 00:34:20.760360 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 8 00:34:20.760366 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 8 00:34:20.760373 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 8 00:34:20.760379 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 8 00:34:20.760384 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 8 00:34:20.760389 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 8 00:34:20.760394 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 8 00:34:20.760400 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 8 00:34:20.760405 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 8 00:34:20.760410 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 8 00:34:20.760416 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 8 00:34:20.760421 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 8 00:34:20.760428 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 8 00:34:20.760433 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 8 00:34:20.760438 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 8 00:34:20.760443 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 8 00:34:20.760449 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 8 00:34:20.760454 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 8 00:34:20.760459 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 8 00:34:20.760465 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 8 00:34:20.760470 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 8 00:34:20.760476 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 8 00:34:20.760482 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 8 00:34:20.760487 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 8 00:34:20.760493 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 8 00:34:20.760498 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 8 00:34:20.760503 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 8 00:34:20.760508 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 8 00:34:20.760514 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 8 00:34:20.760519 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 8 00:34:20.760524 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 8 00:34:20.760530 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 8 00:34:20.760536 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 8 00:34:20.760541 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 8 00:34:20.760547 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 8 00:34:20.760552 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 8 00:34:20.760557 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 8 00:34:20.760563 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 8 00:34:20.760568 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 8 00:34:20.760573 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 8 00:34:20.760579 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 8 00:34:20.760584 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 8 00:34:20.760591 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 8 00:34:20.760596 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 8 00:34:20.760601 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 8 00:34:20.760607 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 8 00:34:20.760612 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 8 00:34:20.760617 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 8 00:34:20.760623 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 8 00:34:20.760628 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 8 00:34:20.760633 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 8 00:34:20.760639 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 8 00:34:20.760644 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 8 00:34:20.760650 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 8 00:34:20.760656 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 8 00:34:20.760661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 8 00:34:20.760667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 8 00:34:20.760672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 8 00:34:20.760678 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 8 00:34:20.760684 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 8 00:34:20.760690 kernel: Zone ranges:
May 8 00:34:20.760695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:34:20.760702 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 8 00:34:20.760708 kernel: Normal empty
May 8 00:34:20.760713 kernel: Movable zone start for each node
May 8 00:34:20.760719 kernel: Early memory node ranges
May 8 00:34:20.760724 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 8 00:34:20.760729 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 8 00:34:20.760735 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 8 00:34:20.760741 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 8 00:34:20.760746 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:34:20.760751 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 8 00:34:20.760758 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 8 00:34:20.760763 kernel: ACPI: PM-Timer IO Port: 0x1008
May 8 00:34:20.760769 kernel: system APIC only can use physical flat
May 8 00:34:20.760774 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 8 00:34:20.760780 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 8 00:34:20.760785 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high
edge lint[0x1]) May 8 00:34:20.760791 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 8 00:34:20.760796 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 8 00:34:20.760802 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 8 00:34:20.760808 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 8 00:34:20.760814 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 8 00:34:20.760819 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 8 00:34:20.760825 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 8 00:34:20.760830 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 8 00:34:20.760836 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 8 00:34:20.760841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 8 00:34:20.760846 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 8 00:34:20.760852 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 8 00:34:20.760858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 8 00:34:20.760864 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 8 00:34:20.760869 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 8 00:34:20.760875 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 8 00:34:20.760880 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 8 00:34:20.760886 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 8 00:34:20.760891 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 8 00:34:20.760896 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 8 00:34:20.760902 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 8 00:34:20.760907 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 8 00:34:20.760914 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 8 00:34:20.760919 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 8 
00:34:20.760925 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 8 00:34:20.760930 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 8 00:34:20.760938 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 8 00:34:20.760947 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 8 00:34:20.760956 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 8 00:34:20.760965 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 8 00:34:20.760972 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 8 00:34:20.760982 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 8 00:34:20.760988 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 8 00:34:20.760993 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 8 00:34:20.760998 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 8 00:34:20.761004 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 8 00:34:20.761009 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 8 00:34:20.761017 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 8 00:34:20.761024 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 8 00:34:20.761030 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 8 00:34:20.761035 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 8 00:34:20.761089 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 8 00:34:20.761097 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 8 00:34:20.761106 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 8 00:34:20.761114 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 8 00:34:20.761119 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 8 00:34:20.761124 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 8 00:34:20.761130 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 8 00:34:20.761135 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 8 00:34:20.761141 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 8 00:34:20.761148 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 8 00:34:20.761154 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 8 00:34:20.761159 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 8 00:34:20.761164 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 8 00:34:20.761170 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 8 00:34:20.761175 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 8 00:34:20.761181 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 8 00:34:20.761186 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 8 00:34:20.761192 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 8 00:34:20.761197 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 8 00:34:20.761203 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 8 00:34:20.761209 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 8 00:34:20.761214 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 8 00:34:20.761220 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 8 00:34:20.761225 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 8 00:34:20.761231 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 8 00:34:20.761236 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 8 00:34:20.761241 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 8 00:34:20.761247 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 8 00:34:20.761253 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 8 00:34:20.761259 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 8 00:34:20.761264 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 8 00:34:20.761269 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high 
edge lint[0x1]) May 8 00:34:20.761275 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 8 00:34:20.761280 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 8 00:34:20.761285 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 8 00:34:20.761291 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 8 00:34:20.761296 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 8 00:34:20.761302 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 8 00:34:20.761308 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 8 00:34:20.761314 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 8 00:34:20.761320 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 8 00:34:20.761327 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 8 00:34:20.761332 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 8 00:34:20.761338 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 8 00:34:20.761343 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 8 00:34:20.761351 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 8 00:34:20.761360 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 8 00:34:20.761368 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 8 00:34:20.761375 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 8 00:34:20.761380 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 8 00:34:20.761386 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 8 00:34:20.761391 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 8 00:34:20.761397 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 8 00:34:20.761402 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 8 00:34:20.761407 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 8 00:34:20.761413 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 8 
00:34:20.761418 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 8 00:34:20.761425 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 8 00:34:20.761431 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 8 00:34:20.761436 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 8 00:34:20.761441 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 8 00:34:20.761447 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 8 00:34:20.761452 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 8 00:34:20.761458 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 8 00:34:20.761463 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 8 00:34:20.761469 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 8 00:34:20.761474 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 8 00:34:20.761480 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 8 00:34:20.761486 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 8 00:34:20.761491 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 8 00:34:20.761497 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 8 00:34:20.761502 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 8 00:34:20.761508 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 8 00:34:20.761513 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 8 00:34:20.761518 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 8 00:34:20.761524 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 8 00:34:20.761530 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 8 00:34:20.761536 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 8 00:34:20.761541 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 8 00:34:20.761549 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 8 00:34:20.761555 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 8 00:34:20.761561 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 8 00:34:20.761566 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 8 00:34:20.761572 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 8 00:34:20.761577 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 8 00:34:20.761583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 8 00:34:20.761590 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:34:20.761595 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 8 00:34:20.761604 kernel: TSC deadline timer available May 8 00:34:20.761611 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 8 00:34:20.761617 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 8 00:34:20.761622 kernel: Booting paravirtualized kernel on VMware hypervisor May 8 00:34:20.761628 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:34:20.761634 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 8 00:34:20.761640 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 May 8 00:34:20.761647 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 May 8 00:34:20.761653 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 8 00:34:20.761658 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 8 00:34:20.761667 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 8 00:34:20.761675 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 8 00:34:20.761681 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 8 00:34:20.761694 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 8 00:34:20.761700 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 8 00:34:20.761706 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 8 00:34:20.761713 
kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 8 00:34:20.761718 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 8 00:34:20.761724 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 8 00:34:20.761730 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 8 00:34:20.761736 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 8 00:34:20.761741 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 8 00:34:20.761747 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 8 00:34:20.761753 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 8 00:34:20.761760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:34:20.761766 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:34:20.761772 kernel: random: crng init done May 8 00:34:20.761778 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 8 00:34:20.761784 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 8 00:34:20.761790 kernel: printk: log_buf_len min size: 262144 bytes May 8 00:34:20.761795 kernel: printk: log_buf_len: 1048576 bytes May 8 00:34:20.761801 kernel: printk: early log buf free: 239648(91%) May 8 00:34:20.761807 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:34:20.761814 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 8 00:34:20.761820 kernel: Fallback order for Node 0: 0 May 8 00:34:20.761826 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515808 May 8 00:34:20.761832 kernel: Policy zone: DMA32 May 8 00:34:20.761838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:34:20.761844 kernel: Memory: 1936368K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 160000K reserved, 0K cma-reserved) May 8 00:34:20.761852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 8 00:34:20.761858 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:34:20.761864 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:34:20.761869 kernel: Dynamic Preempt: voluntary May 8 00:34:20.761876 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:34:20.761882 kernel: rcu: RCU event tracing is enabled. May 8 00:34:20.761888 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 8 00:34:20.761894 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:34:20.761901 kernel: Rude variant of Tasks RCU enabled. May 8 00:34:20.761907 kernel: Tracing variant of Tasks RCU enabled. May 8 00:34:20.761913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:34:20.761919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 8 00:34:20.761924 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 8 00:34:20.761930 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
May 8 00:34:20.761936 kernel: Console: colour VGA+ 80x25 May 8 00:34:20.761942 kernel: printk: console [tty0] enabled May 8 00:34:20.761948 kernel: printk: console [ttyS0] enabled May 8 00:34:20.761954 kernel: ACPI: Core revision 20230628 May 8 00:34:20.761961 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 8 00:34:20.761967 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:34:20.761973 kernel: x2apic enabled May 8 00:34:20.761979 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:34:20.761985 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:34:20.761991 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 8 00:34:20.761997 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) May 8 00:34:20.762003 kernel: Disabled fast string operations May 8 00:34:20.762009 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 8 00:34:20.762016 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 8 00:34:20.762022 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:34:20.762028 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 8 00:34:20.762033 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 8 00:34:20.762225 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 8 00:34:20.762235 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:34:20.762242 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 8 00:34:20.762248 kernel: RETBleed: Mitigation: Enhanced IBRS May 8 00:34:20.762254 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:34:20.762261 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 
00:34:20.762267 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 8 00:34:20.762273 kernel: SRBDS: Unknown: Dependent on hypervisor status May 8 00:34:20.762279 kernel: GDS: Unknown: Dependent on hypervisor status May 8 00:34:20.762285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:34:20.762291 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:34:20.762297 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:34:20.762303 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:34:20.762310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:34:20.762316 kernel: Freeing SMP alternatives memory: 32K May 8 00:34:20.762324 kernel: pid_max: default: 131072 minimum: 1024 May 8 00:34:20.762331 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:34:20.762337 kernel: landlock: Up and running. May 8 00:34:20.762344 kernel: SELinux: Initializing. May 8 00:34:20.762353 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:34:20.762360 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 8 00:34:20.762369 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 8 00:34:20.762377 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:34:20.762383 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:34:20.762389 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 8 00:34:20.762395 kernel: Performance Events: Skylake events, core PMU driver. 
May 8 00:34:20.762401 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 8 00:34:20.762407 kernel: core: CPUID marked event: 'instructions' unavailable May 8 00:34:20.762413 kernel: core: CPUID marked event: 'bus cycles' unavailable May 8 00:34:20.762419 kernel: core: CPUID marked event: 'cache references' unavailable May 8 00:34:20.762424 kernel: core: CPUID marked event: 'cache misses' unavailable May 8 00:34:20.762431 kernel: core: CPUID marked event: 'branch instructions' unavailable May 8 00:34:20.762437 kernel: core: CPUID marked event: 'branch misses' unavailable May 8 00:34:20.762443 kernel: ... version: 1 May 8 00:34:20.762448 kernel: ... bit width: 48 May 8 00:34:20.762454 kernel: ... generic registers: 4 May 8 00:34:20.762460 kernel: ... value mask: 0000ffffffffffff May 8 00:34:20.762466 kernel: ... max period: 000000007fffffff May 8 00:34:20.762472 kernel: ... fixed-purpose events: 0 May 8 00:34:20.762478 kernel: ... event mask: 000000000000000f May 8 00:34:20.762485 kernel: signal: max sigframe size: 1776 May 8 00:34:20.762490 kernel: rcu: Hierarchical SRCU implementation. May 8 00:34:20.762497 kernel: rcu: Max phase no-delay instances is 400. May 8 00:34:20.762504 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 8 00:34:20.762511 kernel: smp: Bringing up secondary CPUs ... May 8 00:34:20.762517 kernel: smpboot: x86: Booting SMP configuration: May 8 00:34:20.762523 kernel: .... 
node #0, CPUs: #1 May 8 00:34:20.762529 kernel: Disabled fast string operations May 8 00:34:20.762536 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 8 00:34:20.762547 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 8 00:34:20.762557 kernel: smp: Brought up 1 node, 2 CPUs May 8 00:34:20.762567 kernel: smpboot: Max logical packages: 128 May 8 00:34:20.762576 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 8 00:34:20.762582 kernel: devtmpfs: initialized May 8 00:34:20.762588 kernel: x86/mm: Memory block size: 128MB May 8 00:34:20.762594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 8 00:34:20.762600 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:34:20.762606 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 8 00:34:20.762612 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:34:20.762619 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:34:20.762625 kernel: audit: initializing netlink subsys (disabled) May 8 00:34:20.762631 kernel: audit: type=2000 audit(1746664459.069:1): state=initialized audit_enabled=0 res=1 May 8 00:34:20.762637 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:34:20.762642 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:34:20.762648 kernel: cpuidle: using governor menu May 8 00:34:20.762654 kernel: Simple Boot Flag at 0x36 set to 0x80 May 8 00:34:20.762660 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:34:20.762666 kernel: dca service started, version 1.12.1 May 8 00:34:20.762673 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 8 00:34:20.762679 kernel: PCI: Using configuration type 1 for base access May 8 00:34:20.762685 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. May 8 00:34:20.762691 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:34:20.762697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:34:20.762703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:34:20.762708 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:34:20.762714 kernel: ACPI: Added _OSI(Module Device) May 8 00:34:20.762720 kernel: ACPI: Added _OSI(Processor Device) May 8 00:34:20.762727 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:34:20.762733 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:34:20.762739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:34:20.762745 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 8 00:34:20.762751 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:34:20.762757 kernel: ACPI: Interpreter enabled May 8 00:34:20.762762 kernel: ACPI: PM: (supports S0 S1 S5) May 8 00:34:20.762768 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:34:20.762774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:34:20.762781 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:34:20.762787 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 8 00:34:20.762793 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 8 00:34:20.762873 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:34:20.762929 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 8 00:34:20.762982 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 8 00:34:20.762991 kernel: PCI host bridge to bus 0000:00 May 8 00:34:20.763075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:34:20.763126 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000cc000-0x000dbfff window] May 8 00:34:20.763174 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:34:20.763238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:34:20.763285 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 8 00:34:20.763329 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 8 00:34:20.763388 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 8 00:34:20.763450 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 8 00:34:20.763518 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 8 00:34:20.763576 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 8 00:34:20.763627 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 8 00:34:20.763687 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 00:34:20.763750 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 00:34:20.763804 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 00:34:20.763853 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 00:34:20.763906 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 8 00:34:20.763960 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 8 00:34:20.764010 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 8 00:34:20.764093 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 8 00:34:20.764154 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 8 00:34:20.764204 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 8 00:34:20.764258 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 8 00:34:20.764311 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 8 00:34:20.764378 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 8 00:34:20.764430 kernel: pci 0000:00:0f.0: reg 
0x18: [mem 0xfe000000-0xfe7fffff] May 8 00:34:20.764480 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 8 00:34:20.764532 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:34:20.764585 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 8 00:34:20.764639 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.764696 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 8 00:34:20.764755 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.764806 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 8 00:34:20.764862 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.764912 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 8 00:34:20.764972 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.765032 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 8 00:34:20.765358 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767072 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 8 00:34:20.767144 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767203 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 8 00:34:20.767269 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767323 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 8 00:34:20.767377 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767429 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 8 00:34:20.767483 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767543 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 8 00:34:20.767606 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767658 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 8 
00:34:20.767711 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767761 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 8 00:34:20.767817 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767869 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 8 00:34:20.767930 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.767982 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 8 00:34:20.768055 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768119 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 8 00:34:20.768178 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768227 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 8 00:34:20.768282 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768339 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 8 00:34:20.768401 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768452 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 8 00:34:20.768508 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768559 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 8 00:34:20.768616 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768666 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 8 00:34:20.768735 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768789 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 8 00:34:20.768842 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768896 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 8 00:34:20.768949 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 8 00:34:20.768999 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold 
May 8 00:34:20.771146 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771213 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771273 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771329 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771384 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771436 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771489 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771540 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771602 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771657 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771711 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771765 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771821 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.771895 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
May 8 00:34:20.771953 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772007 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772074 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772146 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772201 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
May 8 00:34:20.772266 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
May 8 00:34:20.772320 kernel: pci_bus 0000:01: extended config space not accessible
May 8 00:34:20.772372 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:34:20.772433 kernel: pci_bus 0000:02: extended config space not accessible
May 8 00:34:20.772442 kernel: acpiphp: Slot [32] registered
May 8 00:34:20.772449 kernel: acpiphp: Slot [33] registered
May 8 00:34:20.772454 kernel: acpiphp: Slot [34] registered
May 8 00:34:20.772460 kernel: acpiphp: Slot [35] registered
May 8 00:34:20.772466 kernel: acpiphp: Slot [36] registered
May 8 00:34:20.772472 kernel: acpiphp: Slot [37] registered
May 8 00:34:20.772478 kernel: acpiphp: Slot [38] registered
May 8 00:34:20.772486 kernel: acpiphp: Slot [39] registered
May 8 00:34:20.772492 kernel: acpiphp: Slot [40] registered
May 8 00:34:20.772498 kernel: acpiphp: Slot [41] registered
May 8 00:34:20.772503 kernel: acpiphp: Slot [42] registered
May 8 00:34:20.772509 kernel: acpiphp: Slot [43] registered
May 8 00:34:20.772515 kernel: acpiphp: Slot [44] registered
May 8 00:34:20.772521 kernel: acpiphp: Slot [45] registered
May 8 00:34:20.772527 kernel: acpiphp: Slot [46] registered
May 8 00:34:20.772533 kernel: acpiphp: Slot [47] registered
May 8 00:34:20.772539 kernel: acpiphp: Slot [48] registered
May 8 00:34:20.772546 kernel: acpiphp: Slot [49] registered
May 8 00:34:20.772555 kernel: acpiphp: Slot [50] registered
May 8 00:34:20.772561 kernel: acpiphp: Slot [51] registered
May 8 00:34:20.772566 kernel: acpiphp: Slot [52] registered
May 8 00:34:20.772572 kernel: acpiphp: Slot [53] registered
May 8 00:34:20.772578 kernel: acpiphp: Slot [54] registered
May 8 00:34:20.772584 kernel: acpiphp: Slot [55] registered
May 8 00:34:20.772590 kernel: acpiphp: Slot [56] registered
May 8 00:34:20.772595 kernel: acpiphp: Slot [57] registered
May 8 00:34:20.772602 kernel: acpiphp: Slot [58] registered
May 8 00:34:20.772608 kernel: acpiphp: Slot [59] registered
May 8 00:34:20.772614 kernel: acpiphp: Slot [60] registered
May 8 00:34:20.772621 kernel: acpiphp: Slot [61] registered
May 8 00:34:20.772626 kernel: acpiphp: Slot [62] registered
May 8 00:34:20.772632 kernel: acpiphp: Slot [63] registered
May 8 00:34:20.772687 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
May 8 00:34:20.772748 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 8 00:34:20.772808 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 8 00:34:20.772864 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:34:20.772914 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
May 8 00:34:20.772964 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
May 8 00:34:20.773017 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
May 8 00:34:20.774618 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
May 8 00:34:20.774679 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
May 8 00:34:20.774747 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
May 8 00:34:20.774806 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
May 8 00:34:20.774857 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
May 8 00:34:20.774909 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:34:20.775012 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
May 8 00:34:20.775084 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.
You can enable it with 'pcie_aspm=force'
May 8 00:34:20.775143 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 8 00:34:20.775194 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 8 00:34:20.775253 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 8 00:34:20.775305 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 8 00:34:20.775356 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 8 00:34:20.775405 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 8 00:34:20.775455 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:34:20.775511 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 8 00:34:20.775562 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 8 00:34:20.775611 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 8 00:34:20.775663 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:34:20.775716 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 8 00:34:20.775767 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 8 00:34:20.775829 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:34:20.775927 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 8 00:34:20.776009 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 8 00:34:20.776177 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:34:20.776237 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 8 00:34:20.776286 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 8 00:34:20.776335 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:34:20.776396 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 8 00:34:20.776447 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 8 00:34:20.777089 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:34:20.777163 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 8 00:34:20.777215 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 8 00:34:20.777266 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:34:20.777344 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
May 8 00:34:20.777423 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
May 8 00:34:20.777475 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
May 8 00:34:20.777526 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
May 8 00:34:20.777580 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
May 8 00:34:20.777630 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
May 8 00:34:20.777690 kernel: pci 0000:0b:00.0: supports D1 D2
May 8 00:34:20.777741 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 00:34:20.777802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device.
You can enable it with 'pcie_aspm=force'
May 8 00:34:20.777856 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 8 00:34:20.777918 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 8 00:34:20.779116 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 8 00:34:20.779174 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 8 00:34:20.779227 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 8 00:34:20.779277 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 8 00:34:20.779328 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:34:20.779396 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 8 00:34:20.779448 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 8 00:34:20.779502 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 8 00:34:20.779565 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:34:20.779633 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 8 00:34:20.779693 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 8 00:34:20.779753 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:34:20.779815 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 8 00:34:20.779875 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 8 00:34:20.779936 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:34:20.780002 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 8 00:34:20.782103 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 8 00:34:20.782164 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:34:20.782219 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 8 00:34:20.782270 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 8 00:34:20.782320 kernel: pci 0000:00:16.6: bridge window [mem
0xe6300000-0xe63fffff 64bit pref]
May 8 00:34:20.782371 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 8 00:34:20.782421 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 8 00:34:20.782470 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:34:20.782526 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 8 00:34:20.782576 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 8 00:34:20.782625 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 8 00:34:20.782674 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:34:20.782726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 8 00:34:20.782774 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 8 00:34:20.782824 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 8 00:34:20.782880 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:34:20.782935 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 8 00:34:20.782986 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 8 00:34:20.783036 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 8 00:34:20.785766 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:34:20.785825 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 8 00:34:20.785878 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 8 00:34:20.785929 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 8 00:34:20.785984 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 8 00:34:20.786034 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 8 00:34:20.786100 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 8 00:34:20.786152 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 8 00:34:20.786202 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 8 00:34:20.786252 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 8 00:34:20.786303 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 8 00:34:20.786352 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 8 00:34:20.786405 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 8 00:34:20.786456 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 8 00:34:20.786505 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 8 00:34:20.786555 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 8 00:34:20.786605 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 8 00:34:20.786655 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 8 00:34:20.786703 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 8 00:34:20.786752 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 8 00:34:20.786806 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 8 00:34:20.786856 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 8 00:34:20.786905 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 8 00:34:20.786954 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 8 00:34:20.787005 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 8 00:34:20.787071 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 8 00:34:20.787123 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 8 00:34:20.787175 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 8 00:34:20.787240 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 8 00:34:20.787289 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 8 00:34:20.787353 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 8 00:34:20.787404 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 8 00:34:20.787453 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 8 00:34:20.787506 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 8 00:34:20.787555 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 8 00:34:20.787605 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 8 00:34:20.787660 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 8 00:34:20.787711 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 8 00:34:20.787760 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 8 00:34:20.787811 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 8 00:34:20.787860 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 8 00:34:20.787909 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 8 00:34:20.787918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
May 8 00:34:20.787924 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
May 8 00:34:20.787932 kernel: ACPI: PCI: Interrupt link LNKB disabled
May 8 00:34:20.787938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:34:20.787944 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
May 8 00:34:20.787950 kernel: iommu: Default domain type: Translated
May 8 00:34:20.787956 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:34:20.787962 kernel: PCI: Using ACPI for IRQ routing
May 8 00:34:20.787968 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:34:20.787974 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
May 8 00:34:20.787980 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
May 8 00:34:20.788032 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
May 8 00:34:20.790149 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
May 8 00:34:20.790201 kernel: pci 0000:00:0f.0: vgaarb: VGA device added:
decodes=io+mem,owns=io+mem,locks=none
May 8 00:34:20.790210 kernel: vgaarb: loaded
May 8 00:34:20.790217 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
May 8 00:34:20.790223 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
May 8 00:34:20.790229 kernel: clocksource: Switched to clocksource tsc-early
May 8 00:34:20.790235 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:34:20.790241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:34:20.790250 kernel: pnp: PnP ACPI init
May 8 00:34:20.790303 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
May 8 00:34:20.790350 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
May 8 00:34:20.790395 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
May 8 00:34:20.790443 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
May 8 00:34:20.790493 kernel: pnp 00:06: [dma 2]
May 8 00:34:20.790542 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
May 8 00:34:20.790590 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
May 8 00:34:20.790634 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
May 8 00:34:20.790642 kernel: pnp: PnP ACPI: found 8 devices
May 8 00:34:20.790649 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:34:20.790656 kernel: NET: Registered PF_INET protocol family
May 8 00:34:20.790662 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:34:20.790668 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 8 00:34:20.790676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:34:20.790682 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 8 00:34:20.790688 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 8 00:34:20.790693 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 8 00:34:20.790699 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:34:20.790705 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 8 00:34:20.790711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:34:20.790717 kernel: NET: Registered PF_XDP protocol family
May 8 00:34:20.790769 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 8 00:34:20.790824 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 8 00:34:20.790875 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 8 00:34:20.790927 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 8 00:34:20.790979 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 8 00:34:20.791029 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
May 8 00:34:20.791091 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
May 8 00:34:20.791145 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
May 8 00:34:20.791195 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
May 8 00:34:20.791245 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
May 8 00:34:20.791295 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
May 8 00:34:20.791344 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
May 8 00:34:20.791394 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
May 8 00:34:20.791446 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
May 8 00:34:20.791496 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
May 8 00:34:20.791546 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
May 8 00:34:20.791596 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
May 8 00:34:20.791645 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
May 8 00:34:20.791705 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
May 8 00:34:20.791756 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
May 8 00:34:20.791806 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
May 8 00:34:20.791855 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
May 8 00:34:20.791910 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
May 8 00:34:20.791963 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
May 8 00:34:20.792016 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
May 8 00:34:20.793967 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794029 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794102 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794269 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794337 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794400 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794452 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794505 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794556 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794605 kernel: pci 0000:00:15.7: BAR 13: failed to assign
[io size 0x1000]
May 8 00:34:20.794656 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794705 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794756 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794807 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794857 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.794910 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.794961 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795010 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795100 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795150 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795199 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795248 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795297 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795346 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795399 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795449 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795498 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795548 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795598 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795647 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795697 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795747 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795799 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795849 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.795899 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.795957 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796008 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796068 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796120 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796170 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796223 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796273 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796328 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796387 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796438 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796488 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796545 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796595 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796652 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796716 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796774 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796824 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796874 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.796933 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.796985 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798083 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798167 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798234 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798293 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798343 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798393 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798447 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798498 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798546 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798595 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798645 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798702 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798769 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798824 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798882 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.798939 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.798990 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799095 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799152 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799208 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799258 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799322 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799387 kernel: pci 0000:00:15.6: BAR 13: failed to
assign [io size 0x1000]
May 8 00:34:20.799448 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799504 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799554 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799603 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799656 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
May 8 00:34:20.799709 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
May 8 00:34:20.799760 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 8 00:34:20.799811 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
May 8 00:34:20.799864 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
May 8 00:34:20.799929 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
May 8 00:34:20.799991 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:34:20.800083 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
May 8 00:34:20.800142 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
May 8 00:34:20.800194 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
May 8 00:34:20.800244 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
May 8 00:34:20.800293 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
May 8 00:34:20.800348 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
May 8 00:34:20.800403 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
May 8 00:34:20.800456 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
May 8 00:34:20.800526 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:34:20.800586 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
May 8 00:34:20.800649 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
May 8 00:34:20.800699 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
May 8 00:34:20.800749 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:34:20.800798 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
May 8 00:34:20.800847 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
May 8 00:34:20.800905 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:34:20.800955 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
May 8 00:34:20.801004 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
May 8 00:34:20.801445 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:34:20.801502 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
May 8 00:34:20.801553 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
May 8 00:34:20.801618 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:34:20.801685 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
May 8 00:34:20.801738 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
May 8 00:34:20.801800 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:34:20.801852 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
May 8 00:34:20.801906 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
May 8 00:34:20.801957 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:34:20.802011 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
May 8 00:34:20.802217 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
May 8 00:34:20.802281 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
May 8 00:34:20.802342 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
May 8 00:34:20.802399 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
May 8 00:34:20.802451 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
May 8 00:34:20.802502 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
May 8 00:34:20.802551 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
May 8 00:34:20.802600 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:34:20.802657 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
May 8 00:34:20.802716 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
May 8 00:34:20.802778 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
May 8 00:34:20.802834 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:34:20.803162 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
May 8 00:34:20.803221 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
May 8 00:34:20.803282 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:34:20.803342 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
May 8 00:34:20.803392 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
May 8 00:34:20.803452 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:34:20.803503 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
May 8 00:34:20.803552 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
May 8 00:34:20.803605 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:34:20.803662 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
May 8 00:34:20.803712 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
May 8 00:34:20.803770 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
May 8 00:34:20.803825 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
May 8 00:34:20.803889 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
May 8 00:34:20.803949 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:34:20.804000 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
May 8 00:34:20.804067 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
May 8 00:34:20.804126 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
May 8 00:34:20.804177 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:34:20.804228 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
May 8 00:34:20.804277 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
May 8 00:34:20.804333 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
May 8 00:34:20.804383 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:34:20.804439 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
May 8 00:34:20.804505 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
May 8 00:34:20.804560 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
May 8 00:34:20.804616 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:34:20.804669 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
May 8 00:34:20.804719 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
May 8 00:34:20.804768 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
May 8 00:34:20.804818 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
May 8 00:34:20.804875 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
May 8 00:34:20.804938 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
May 8 00:34:20.805001 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
May 8 00:34:20.805070 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
May 8 00:34:20.805125 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
May 8 00:34:20.805182 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
May 8 00:34:20.805232 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
May 8 00:34:20.805281 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
May 8 00:34:20.805331 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
May 8 00:34:20.805381 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
May 8 00:34:20.805433 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
May 8 00:34:20.805486 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
May 8 00:34:20.805552 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
May 8 00:34:20.805603 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
May 8 00:34:20.805664 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
May 8 00:34:20.805718 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
May 8 00:34:20.805767 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
May 8 00:34:20.805816 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
May 8 00:34:20.805865 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
May 8 00:34:20.805918 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
May 8 00:34:20.805978 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
May 8 00:34:20.806044 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
May 8 00:34:20.806115 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
May 8 00:34:20.806180 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
May 8 00:34:20.806242 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 8 00:34:20.806294 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
May 8 00:34:20.806343 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
May 8 00:34:20.806392 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
May 8 00:34:20.806446 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
May 8 00:34:20.806505 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
May 8 00:34:20.806574 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
May 8 00:34:20.806636 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
May 8 00:34:20.806694 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
May 8 00:34:20.806764 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
May 8 00:34:20.806820 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
May 8 00:34:20.806870 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
May 8 00:34:20.806919 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 8 00:34:20.806969 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
May 8 00:34:20.807015 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window]
May 8 00:34:20.807117 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window]
May 8 00:34:20.807170 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window]
May 8 00:34:20.807222 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window]
May 8 00:34:20.807284 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff]
May 8 00:34:20.807343 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
May 8 00:34:20.807390 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
May 8 00:34:20.807435 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
May 8 00:34:20.807480 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window]
May 8 00:34:20.807525 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window]
May 8 00:34:20.807570 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window]
May 8 00:34:20.807626 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window]
May 8 00:34:20.807684 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff]
May 8 00:34:20.807737 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
May 8 00:34:20.807792 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
May 8 00:34:20.807854 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff]
May 8 00:34:20.807902 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
May 8 00:34:20.807949 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
May 8 00:34:20.808006 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff]
May 8 00:34:20.808065 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
May 8 00:34:20.808113 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
May 8 00:34:20.808172 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
May 8 00:34:20.808237 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
May 8 00:34:20.808294 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
May 8 00:34:20.808351 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
May 8 00:34:20.808421 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
May 8 00:34:20.808471 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
May 8 00:34:20.808522 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
May 8 00:34:20.808568 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
May 8 00:34:20.808628 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
May 8 00:34:20.808688 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
May 8 00:34:20.808754 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff]
May 8 00:34:20.808811 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
May 8 00:34:20.808869 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
May 8 00:34:20.808926 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff]
May 8 00:34:20.808983 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
May 8 00:34:20.809030 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
May 8 00:34:20.809168 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff]
May 8 00:34:20.809224 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
May 8 00:34:20.809282 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
May 8 00:34:20.809346 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
May 8 00:34:20.809403 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
May 8 00:34:20.809468 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
May 8 00:34:20.809528 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
May 8 00:34:20.809583 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
May 8 00:34:20.809630 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
May 8 00:34:20.809680 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
May 8 00:34:20.809730 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
May 8 00:34:20.809791 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
May 8 00:34:20.809849 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
May 8 00:34:20.809922 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff]
May 8 00:34:20.809974 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
May 8 00:34:20.810020 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
May 8 00:34:20.810146 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff]
May 8 00:34:20.810194 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
May 8 00:34:20.810240 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
May 8 00:34:20.810303 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff]
May 8 00:34:20.810357 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
May 8 00:34:20.810416 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
May 8 00:34:20.810481 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
May 8 00:34:20.810528 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
May 8 00:34:20.810582 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
May 8 00:34:20.810636 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
May 8 00:34:20.810689 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
May 8 00:34:20.810735 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
May 8 00:34:20.810784 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
May 8 00:34:20.810840 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
May 8 00:34:20.810897 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
May 8 00:34:20.810952 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
May 8 00:34:20.811020 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff]
May 8 00:34:20.811113 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
May 8 00:34:20.811172 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
May 8 00:34:20.811227 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff]
May 8 00:34:20.811273 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
May 8 00:34:20.811319 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
May 8 00:34:20.811375 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
May 8 00:34:20.811430 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
May 8 00:34:20.811493 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
May 8 00:34:20.811550 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
May 8 00:34:20.811610 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
May 8 00:34:20.811668 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
May 8 00:34:20.811729 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
May 8 00:34:20.811776 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
May 8 00:34:20.811826 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
May 8 00:34:20.811872 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
May 8 00:34:20.811935 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
May 8 00:34:20.811988 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
May 8 00:34:20.812077 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 8 00:34:20.812088 kernel: PCI: CLS 32 bytes, default 64
May 8 00:34:20.812095 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 8 00:34:20.812102 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 8 00:34:20.812111 kernel: clocksource: Switched to clocksource tsc
May 8 00:34:20.812118 kernel: Initialise system trusted keyrings
May 8 00:34:20.812124 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 8 00:34:20.812132 kernel: Key type asymmetric registered
May 8 00:34:20.812142 kernel: Asymmetric key parser 'x509' registered
May 8 00:34:20.812152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:34:20.812159 kernel: io scheduler mq-deadline registered
May 8 00:34:20.812165 kernel: io scheduler kyber registered
May 8 00:34:20.812172 kernel: io scheduler bfq registered
May 8 00:34:20.812240 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
May 8 00:34:20.812298 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812351 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
May 8 00:34:20.812404 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812467 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
May 8 00:34:20.812530 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812594 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
May 8 00:34:20.812657 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812717 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
May 8 00:34:20.812775 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812834 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
May 8 00:34:20.812885 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.812940 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
May 8 00:34:20.813002 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813112 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
May 8 00:34:20.813171 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813237 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
May 8 00:34:20.813297 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813349 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
May 8 00:34:20.813400 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813457 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
May 8 00:34:20.813517 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813581 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
May 8 00:34:20.813647 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813711 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
May 8 00:34:20.813777 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813835 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
May 8 00:34:20.813886 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.813940 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
May 8 00:34:20.813998 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814121 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
May 8 00:34:20.814187 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814239 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
May 8 00:34:20.814296 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814364 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
May 8 00:34:20.814416 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814466 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
May 8 00:34:20.814515 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814577 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
May 8 00:34:20.814638 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814701 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
May 8 00:34:20.814762 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814818 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
May 8 00:34:20.814873 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.814952 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
May 8 00:34:20.815006 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815097 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
May 8 00:34:20.815154 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815215 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
May 8 00:34:20.815283 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815334 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
May 8 00:34:20.815396 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815450 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
May 8 00:34:20.815500 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815550 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
May 8 00:34:20.815604 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815663 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
May 8 00:34:20.815729 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815791 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
May 8 00:34:20.815844 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.815900 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
May 8 00:34:20.815959 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.816013 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
May 8 00:34:20.816093 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
May 8 00:34:20.816104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:34:20.816111 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:34:20.816117 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:34:20.816123 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
May 8 00:34:20.816130 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:34:20.816139 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:34:20.816203 kernel: rtc_cmos 00:01: registered as rtc0
May 8 00:34:20.816262 kernel: rtc_cmos 00:01: setting system clock to 2025-05-08T00:34:20 UTC (1746664460)
May 8 00:34:20.816314 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
May 8 00:34:20.816328 kernel: intel_pstate: CPU model not supported
May 8 00:34:20.816335 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:34:20.816341 kernel: NET: Registered PF_INET6 protocol family
May 8 00:34:20.816348 kernel: Segment Routing with IPv6
May 8 00:34:20.816354 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:34:20.816363 kernel: NET: Registered PF_PACKET protocol family
May 8 00:34:20.816370 kernel: Key type dns_resolver registered
May 8 00:34:20.816376 kernel: IPI shorthand broadcast: enabled
May 8 00:34:20.816382 kernel: sched_clock: Marking stable (942371927, 242356166)->(1256642861, -71914768)
May 8 00:34:20.816389 kernel: registered taskstats version 1
May 8 00:34:20.816395 kernel: Loading compiled-in X.509 certificates
May 8 00:34:20.816401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 00:34:20.816408 kernel: Key type .fscrypt registered
May 8 00:34:20.816417 kernel: Key type fscrypt-provisioning registered
May 8 00:34:20.816425 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:34:20.816431 kernel: ima: Allocated hash algorithm: sha1
May 8 00:34:20.816439 kernel: ima: No architecture policies found
May 8 00:34:20.816450 kernel: clk: Disabling unused clocks
May 8 00:34:20.816458 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 00:34:20.816464 kernel: Write protecting the kernel read-only data: 36864k
May 8 00:34:20.816470 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 00:34:20.816476 kernel: Run /init as init process
May 8 00:34:20.816487 kernel: with arguments:
May 8 00:34:20.816495 kernel: /init
May 8 00:34:20.816502 kernel: with environment:
May 8 00:34:20.816508 kernel: HOME=/
May 8 00:34:20.816514 kernel: TERM=linux
May 8 00:34:20.816520 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:34:20.816527 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:34:20.816535 systemd[1]: Detected virtualization vmware.
May 8 00:34:20.816542 systemd[1]: Detected architecture x86-64.
May 8 00:34:20.816550 systemd[1]: Running in initrd.
May 8 00:34:20.816557 systemd[1]: No hostname configured, using default hostname.
May 8 00:34:20.816563 systemd[1]: Hostname set to .
May 8 00:34:20.816569 systemd[1]: Initializing machine ID from random generator.
May 8 00:34:20.816576 systemd[1]: Queued start job for default target initrd.target.
May 8 00:34:20.816582 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:34:20.816590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:34:20.816597 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:34:20.816605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:34:20.816611 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:34:20.816618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:34:20.816626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:34:20.816632 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:34:20.816639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:34:20.816647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:34:20.816653 systemd[1]: Reached target paths.target - Path Units.
May 8 00:34:20.816660 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:34:20.816670 systemd[1]: Reached target swap.target - Swaps.
May 8 00:34:20.816677 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:34:20.816683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:34:20.816690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:34:20.816701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:34:20.816711 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:34:20.816719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:34:20.816725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:34:20.816732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:34:20.816739 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:34:20.816747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:34:20.816754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:34:20.816765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:34:20.816775 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:34:20.816782 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:34:20.816790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:34:20.816797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:20.816816 systemd-journald[214]: Collecting audit messages is disabled.
May 8 00:34:20.816836 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:34:20.816845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:34:20.816856 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:34:20.816864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:34:20.816870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:34:20.816879 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:34:20.816885 kernel: Bridge firewalling registered
May 8 00:34:20.816892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:34:20.816898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:34:20.816905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:20.816911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:34:20.816918 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:20.816925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:34:20.816934 systemd-journald[214]: Journal started
May 8 00:34:20.816949 systemd-journald[214]: Runtime Journal (/run/log/journal/0d0bcc9349fd49a6858ad893107d902e) is 4.8M, max 38.6M, 33.8M free.
May 8 00:34:20.776515 systemd-modules-load[215]: Inserted module 'overlay'
May 8 00:34:20.797778 systemd-modules-load[215]: Inserted module 'br_netfilter'
May 8 00:34:20.819312 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:34:20.824132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:34:20.827065 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:20.829144 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:34:20.832026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:34:20.835298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:34:20.837133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:34:20.841165 dracut-cmdline[244]: dracut-dracut-053
May 8 00:34:20.842377 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:34:20.859274 systemd-resolved[251]: Positive Trust Anchors:
May 8 00:34:20.859283 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:34:20.859312 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:34:20.861150 systemd-resolved[251]: Defaulting to hostname 'linux'.
May 8 00:34:20.861890 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:34:20.862075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:34:20.894069 kernel: SCSI subsystem initialized
May 8 00:34:20.900051 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:34:20.907051 kernel: iscsi: registered transport (tcp)
May 8 00:34:20.920049 kernel: iscsi: registered transport (qla4xxx)
May 8 00:34:20.920067 kernel: QLogic iSCSI HBA Driver
May 8 00:34:20.939660 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:34:20.944142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:34:20.959321 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:34:20.960404 kernel: device-mapper: uevent: version 1.0.3
May 8 00:34:20.960414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:34:20.993071 kernel: raid6: avx2x4 gen() 44245 MB/s
May 8 00:34:21.010065 kernel: raid6: avx2x2 gen() 46366 MB/s
May 8 00:34:21.027333 kernel: raid6: avx2x1 gen() 35277 MB/s
May 8 00:34:21.027380 kernel: raid6: using algorithm avx2x2 gen() 46366 MB/s
May 8 00:34:21.045317 kernel: raid6: .... xor() 28772 MB/s, rmw enabled
May 8 00:34:21.045366 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:34:21.059064 kernel: xor: automatically using best checksumming function avx
May 8 00:34:21.158063 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:34:21.164454 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:34:21.169140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:34:21.176215 systemd-udevd[433]: Using default interface naming scheme 'v255'.
May 8 00:34:21.178686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:34:21.184143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:34:21.190913 dracut-pre-trigger[435]: rd.md=0: removing MD RAID activation
May 8 00:34:21.205879 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:34:21.210122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:34:21.288793 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:34:21.291169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:34:21.305787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:34:21.306538 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:34:21.306805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:34:21.307004 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:34:21.312179 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:34:21.320480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:34:21.361066 kernel: VMware PVSCSI driver - version 1.0.7.0-k
May 8 00:34:21.367544 kernel: vmw_pvscsi: using 64bit dma
May 8 00:34:21.367579 kernel: vmw_pvscsi: max_id: 16
May 8 00:34:21.367592 kernel: vmw_pvscsi: setting ring_pages to 8
May 8 00:34:21.370251 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
May 8 00:34:21.370274 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
May 8 00:34:21.381317 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
May 8 00:34:21.385055 kernel: vmw_pvscsi: enabling reqCallThreshold
May 8 00:34:21.385084 kernel: vmw_pvscsi: driver-based request coalescing enabled
May 8 00:34:21.385097 kernel: vmw_pvscsi: using MSI-X
May 8 00:34:21.385114 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:34:21.385126 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
May 8 00:34:21.393186 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
May 8 00:34:21.393355 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
May 8 00:34:21.398399 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
May 8 00:34:21.398433 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:34:21.399174 kernel: AES CTR mode by8 optimization enabled
May 8 00:34:21.402142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:34:21.402216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:21.402523 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:21.402831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:34:21.402947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:21.403076 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:21.407090 kernel: libata version 3.00 loaded.
May 8 00:34:21.408077 kernel: ata_piix 0000:00:07.1: version 2.13
May 8 00:34:21.419533 kernel: scsi host1: ata_piix
May 8 00:34:21.419618 kernel: scsi host2: ata_piix
May 8 00:34:21.419685 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
May 8 00:34:21.419695 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
May 8 00:34:21.408289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:21.425687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:21.430143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:34:21.441446 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:21.589128 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
May 8 00:34:21.597284 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
May 8 00:34:21.609064 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
May 8 00:34:21.647654 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 8 00:34:21.647744 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
May 8 00:34:21.647821 kernel: sd 0:0:0:0: [sda] Cache data unavailable
May 8 00:34:21.647886 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
May 8 00:34:21.647959 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
May 8 00:34:21.648027 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:34:21.648062 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:34:21.648131 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:21.648139 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 8 00:34:21.677828 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487)
May 8 00:34:21.681941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
May 8 00:34:21.685149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
May 8 00:34:21.687904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
May 8 00:34:21.693055 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (485)
May 8 00:34:21.698940 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
May 8 00:34:21.699249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
May 8 00:34:21.704158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:34:21.771078 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:21.778101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:22.779091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 8 00:34:22.779722 disk-uuid[593]: The operation has completed successfully.
May 8 00:34:22.836374 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:34:22.836453 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:34:22.840151 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:34:22.847211 sh[609]: Success
May 8 00:34:22.872076 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 8 00:34:22.941484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:34:22.943115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:34:22.943483 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:34:23.029624 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:34:23.029670 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.029684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:34:23.031185 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:34:23.032335 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:34:23.041059 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 8 00:34:23.042960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:34:23.052173 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
May 8 00:34:23.053629 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:34:23.069868 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.069906 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.069921 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:23.077061 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:23.084969 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:34:23.086114 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.090909 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:34:23.095963 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:34:23.205712 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
May 8 00:34:23.216174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:34:23.251539 ignition[669]: Ignition 2.19.0
May 8 00:34:23.251547 ignition[669]: Stage: fetch-offline
May 8 00:34:23.251573 ignition[669]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.251579 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.251639 ignition[669]: parsed url from cmdline: ""
May 8 00:34:23.251641 ignition[669]: no config URL provided
May 8 00:34:23.251643 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:34:23.251648 ignition[669]: no config at "/usr/lib/ignition/user.ign"
May 8 00:34:23.253143 ignition[669]: config successfully fetched
May 8 00:34:23.253174 ignition[669]: parsing config with SHA512: 2c48eaad9d45e5dd108e9fc0a923f3141af6f87f06d8cc108b16a5b2a9d4bb72868722f6b00c7cfbdf9338fb56d71b2b2d98a545b4fc6a042bb8886dfa630c2f
May 8 00:34:23.255962 unknown[669]: fetched base config from "system"
May 8 00:34:23.256125 unknown[669]: fetched user config from "vmware"
May 8 00:34:23.256704 ignition[669]: fetch-offline: fetch-offline passed
May 8 00:34:23.256892 ignition[669]: Ignition finished successfully
May 8 00:34:23.257683 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:34:23.277664 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:34:23.283179 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:34:23.294930 systemd-networkd[804]: lo: Link UP
May 8 00:34:23.294937 systemd-networkd[804]: lo: Gained carrier
May 8 00:34:23.295813 systemd-networkd[804]: Enumeration completed
May 8 00:34:23.295863 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:34:23.296069 systemd[1]: Reached target network.target - Network.
May 8 00:34:23.296195 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:34:23.296607 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:34:23.296995 systemd-networkd[804]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
May 8 00:34:23.300231 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
May 8 00:34:23.300348 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
May 8 00:34:23.301571 systemd-networkd[804]: ens192: Link UP
May 8 00:34:23.301575 systemd-networkd[804]: ens192: Gained carrier
May 8 00:34:23.309754 ignition[806]: Ignition 2.19.0
May 8 00:34:23.310059 ignition[806]: Stage: kargs
May 8 00:34:23.310280 ignition[806]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.310418 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.311150 ignition[806]: kargs: kargs passed
May 8 00:34:23.311297 ignition[806]: Ignition finished successfully
May 8 00:34:23.312540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:34:23.317144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:34:23.325431 ignition[814]: Ignition 2.19.0
May 8 00:34:23.325438 ignition[814]: Stage: disks
May 8 00:34:23.325551 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.325558 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.326255 ignition[814]: disks: disks passed
May 8 00:34:23.326829 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:34:23.326286 ignition[814]: Ignition finished successfully
May 8 00:34:23.327281 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:34:23.327440 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:34:23.327649 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:34:23.327861 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:34:23.328054 systemd[1]: Reached target basic.target - Basic System.
May 8 00:34:23.332140 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:34:23.343362 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 8 00:34:23.345141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:34:23.348126 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:34:23.428925 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:34:23.429190 kernel: EXT4-fs (sda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:34:23.429422 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:34:23.440115 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:34:23.441639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:34:23.442021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:34:23.442064 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:34:23.442086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:34:23.446418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:34:23.447267 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:34:23.456050 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (830)
May 8 00:34:23.465897 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.465933 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:23.465948 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:23.503063 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:23.508421 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:34:23.550425 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:34:23.553244 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory
May 8 00:34:23.555734 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:34:23.558302 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:34:23.689546 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:34:23.695151 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:34:23.697660 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:34:23.703054 kernel: BTRFS info (device sda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:23.723426 ignition[943]: INFO : Ignition 2.19.0
May 8 00:34:23.723426 ignition[943]: INFO : Stage: mount
May 8 00:34:23.723790 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:23.723790 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:23.724218 ignition[943]: INFO : mount: mount passed
May 8 00:34:23.724340 ignition[943]: INFO : Ignition finished successfully
May 8 00:34:23.724717 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:34:23.725674 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:34:23.796423 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:34:24.026908 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:34:24.033222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:34:24.044134 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (955)
May 8 00:34:24.044183 kernel: BTRFS info (device sda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:34:24.047074 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:34:24.047108 kernel: BTRFS info (device sda6): using free space tree
May 8 00:34:24.052056 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 8 00:34:24.053674 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:34:24.068876 ignition[972]: INFO : Ignition 2.19.0
May 8 00:34:24.068876 ignition[972]: INFO : Stage: files
May 8 00:34:24.069548 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:24.069548 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:24.069857 ignition[972]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:34:24.070075 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:34:24.070075 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:34:24.071806 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:34:24.072093 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:34:24.072536 unknown[972]: wrote ssh authorized keys file for user: core
May 8 00:34:24.072894 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:34:24.074409 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:34:24.110624 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:34:24.269767 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:34:24.270019 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:34:24.270661 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:24.271800 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:34:24.778300 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:34:24.817316 systemd-networkd[804]: ens192: Gained IPv6LL
May 8 00:34:25.121848 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:34:25.121848 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 8 00:34:25.122464 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
May 8 00:34:25.122464 ignition[972]: INFO : files: op(d): [started] processing unit "containerd.service"
May 8 00:34:25.130436 ignition[972]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 8 00:34:25.130688 ignition[972]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 8 00:34:25.131879 ignition[972]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:34:25.298274 ignition[972]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:34:25.301058 ignition[972]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:34:25.301664 ignition[972]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:34:25.301664 ignition[972]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:34:25.301664 ignition[972]: INFO : files: files passed
May 8 00:34:25.301664 ignition[972]: INFO : Ignition finished successfully
May 8 00:34:25.302144 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:34:25.305172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:34:25.306964 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:34:25.308349 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:34:25.308582 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:34:25.314870 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.314870 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.315828 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:34:25.316890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:34:25.317507 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:34:25.321293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:34:25.335884 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:34:25.335983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:34:25.336468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:34:25.336626 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:34:25.336870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:34:25.337519 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:34:25.347347 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:34:25.352183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:34:25.357915 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:34:25.358280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:34:25.358595 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:34:25.358860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:34:25.359200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:34:25.359578 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:34:25.359867 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:34:25.360139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:34:25.360422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:34:25.360711 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:34:25.360999 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:34:25.361346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:34:25.361631 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:34:25.361924 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:34:25.362205 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:34:25.362454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:34:25.362525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:34:25.363010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:34:25.363288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:34:25.363587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:34:25.363736 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:34:25.364023 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:34:25.364101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:34:25.364549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:34:25.364616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:34:25.364936 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:34:25.365318 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:34:25.369063 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:34:25.369230 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:34:25.369515 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:34:25.369693 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:34:25.369741 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:34:25.369900 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:34:25.369945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:34:25.370184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:34:25.370246 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:34:25.370406 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:34:25.370464 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:34:25.382154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:34:25.382269 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:34:25.382344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:34:25.383706 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:34:25.383814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:34:25.383881 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:34:25.384113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:34:25.384174 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:34:25.386558 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:34:25.386820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:34:25.391941 ignition[1028]: INFO : Ignition 2.19.0
May 8 00:34:25.394387 ignition[1028]: INFO : Stage: umount
May 8 00:34:25.394387 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:34:25.394387 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
May 8 00:34:25.394387 ignition[1028]: INFO : umount: umount passed
May 8 00:34:25.394387 ignition[1028]: INFO : Ignition finished successfully
May 8 00:34:25.395056 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:34:25.395117 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:34:25.395547 systemd[1]: Stopped target network.target - Network.
May 8 00:34:25.395646 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:34:25.395688 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:34:25.395801 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:34:25.395829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:34:25.395933 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:34:25.395960 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:34:25.396074 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:34:25.396095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:34:25.396313 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:34:25.396467 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:34:25.403127 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:34:25.403357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:34:25.405086 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:34:25.405387 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:34:25.405412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:34:25.406378 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:34:25.406445 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:34:25.407014 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:34:25.407231 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:34:25.411168 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:34:25.411269 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:34:25.411313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:34:25.411469 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
May 8 00:34:25.411500 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
May 8 00:34:25.412619 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:34:25.412649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:34:25.412819 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:34:25.412866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:34:25.413685 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:34:25.421196 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:34:25.421281 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:34:25.426585 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:34:25.426693 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:34:25.427733 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:34:25.427773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:34:25.428206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:34:25.428226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:34:25.428408 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:34:25.428432 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:34:25.428748 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:34:25.428771 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:34:25.429119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:34:25.429142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:34:25.433148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:34:25.433271 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:34:25.433307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:34:25.433466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:34:25.433494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:25.436396 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:34:25.436478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:34:25.511909 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:34:25.511983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:34:25.512591 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:34:25.512778 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:34:25.512818 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:34:25.515156 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:34:25.567769 systemd[1]: Switching root.
May 8 00:34:25.589446 systemd-journald[214]: Journal stopped
May 8 00:34:27.285492 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
May 8 00:34:27.285514 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:34:27.285522 kernel: SELinux: policy capability open_perms=1
May 8 00:34:27.285527 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:34:27.285532 kernel: SELinux: policy capability always_check_network=0
May 8 00:34:27.285537 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:34:27.285545 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:34:27.285551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:34:27.285557 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:34:27.285564 systemd[1]: Successfully loaded SELinux policy in 68.354ms.
May 8 00:34:27.285570 kernel: audit: type=1403 audit(1746664466.603:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:34:27.285576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.868ms.
May 8 00:34:27.285583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:34:27.285591 systemd[1]: Detected virtualization vmware.
May 8 00:34:27.285597 systemd[1]: Detected architecture x86-64.
May 8 00:34:27.285603 systemd[1]: Detected first boot.
May 8 00:34:27.285610 systemd[1]: Initializing machine ID from random generator.
May 8 00:34:27.285617 zram_generator::config[1088]: No configuration found.
May 8 00:34:27.285624 systemd[1]: Populated /etc with preset unit settings.
May 8 00:34:27.285632 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:34:27.285641 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
May 8 00:34:27.285651 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:34:27.285660 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 8 00:34:27.285667 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:34:27.285675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:34:27.285681 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:34:27.285688 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:34:27.285694 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:34:27.285701 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:34:27.285707 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:34:27.285715 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:34:27.285722 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:34:27.285729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:34:27.285735 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:34:27.285742 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:34:27.285748 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:34:27.285754 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:34:27.285761 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:34:27.285767 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:34:27.285775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:34:27.285782 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:34:27.285790 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:34:27.285797 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:34:27.285803 systemd[1]: Reached target swap.target - Swaps.
May 8 00:34:27.285811 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:34:27.285819 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:34:27.285829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:34:27.285843 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:34:27.285854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:34:27.285865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:34:27.285872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:34:27.285881 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:34:27.285889 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:34:27.285896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:34:27.285903 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:34:27.285910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:27.285917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:34:27.285924 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:34:27.285930 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:34:27.285937 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:34:27.285945 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
May 8 00:34:27.285952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:34:27.285959 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:34:27.285965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:34:27.285972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:34:27.285979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:34:27.285985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:34:27.285992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:34:27.286000 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:34:27.286008 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 8 00:34:27.286014 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 8 00:34:27.286021 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:34:27.286028 kernel: fuse: init (API version 7.39)
May 8 00:34:27.286035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:34:27.286057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:34:27.286075 systemd-journald[1182]: Collecting audit messages is disabled.
May 8 00:34:27.286095 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:34:27.286102 systemd-journald[1182]: Journal started
May 8 00:34:27.286117 systemd-journald[1182]: Runtime Journal (/run/log/journal/19fde08fb48c4219b503e15659632779) is 4.8M, max 38.6M, 33.8M free.
May 8 00:34:27.286488 jq[1166]: true
May 8 00:34:27.289082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:34:27.291066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:27.291086 kernel: loop: module loaded
May 8 00:34:27.292118 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:34:27.294330 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:34:27.300168 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:34:27.300493 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:34:27.300636 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:34:27.300944 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:34:27.301099 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:34:27.302242 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:34:27.302467 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:34:27.302548 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:34:27.302760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:34:27.302833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:34:27.303046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:34:27.303122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:34:27.303333 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:34:27.303407 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:34:27.303610 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:34:27.303682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:34:27.308491 jq[1194]: true
May 8 00:34:27.314158 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:34:27.316898 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:34:27.318223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:34:27.328152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:34:27.330112 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:34:27.330372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:34:27.333229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:34:27.333381 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:34:27.336033 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:34:27.336308 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:34:27.355189 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:34:27.359229 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:34:27.359477 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:34:27.378051 kernel: ACPI: bus type drm_connector registered
May 8 00:34:27.380129 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:34:27.380741 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:34:27.380858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:34:27.381162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:34:27.385122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:34:27.388638 systemd-journald[1182]: Time spent on flushing to /var/log/journal/19fde08fb48c4219b503e15659632779 is 18.054ms for 1820 entries.
May 8 00:34:27.388638 systemd-journald[1182]: System Journal (/var/log/journal/19fde08fb48c4219b503e15659632779) is 8.0M, max 584.8M, 576.8M free.
May 8 00:34:27.817258 systemd-journald[1182]: Received client request to flush runtime journal.
May 8 00:34:27.615516 ignition[1216]: Ignition 2.19.0
May 8 00:34:27.436932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:34:27.615885 ignition[1216]: deleting config from guestinfo properties
May 8 00:34:27.441129 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:34:27.704115 ignition[1216]: Successfully deleted config
May 8 00:34:27.456936 udevadm[1235]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:34:27.506370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:34:27.543610 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 8 00:34:27.543619 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 8 00:34:27.548278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:34:27.586909 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:34:27.587491 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:34:27.705420 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
May 8 00:34:27.818782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:34:27.831078 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:34:27.838217 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:34:28.065502 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:34:28.070505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:34:28.079557 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 8 00:34:28.079752 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
May 8 00:34:28.082317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:34:28.561703 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:34:28.569145 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:34:28.583693 systemd-udevd[1286]: Using default interface naming scheme 'v255'.
May 8 00:34:28.609995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:34:28.617185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:34:28.633923 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:34:28.645620 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 8 00:34:28.690070 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:34:28.698076 kernel: ACPI: button: Power Button [PWRF]
May 8 00:34:28.699862 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:34:28.743062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1289)
May 8 00:34:28.785074 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
May 8 00:34:28.813092 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:34:28.820096 (udev-worker)[1291]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
May 8 00:34:28.823055 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
May 8 00:34:28.823222 kernel: Guest personality initialized and is active
May 8 00:34:28.824122 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 8 00:34:28.825057 kernel: Initialized host personality
May 8 00:34:28.828075 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:34:28.846149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:34:28.854429 systemd-networkd[1293]: lo: Link UP
May 8 00:34:28.854631 systemd-networkd[1293]: lo: Gained carrier
May 8 00:34:28.856248 systemd-networkd[1293]: Enumeration completed
May 8 00:34:28.856509 systemd-networkd[1293]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
May 8 00:34:28.856859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
May 8 00:34:28.857031 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:34:28.858834 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
May 8 00:34:28.858960 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
May 8 00:34:28.859443 systemd-networkd[1293]: ens192: Link UP
May 8 00:34:28.859588 systemd-networkd[1293]: ens192: Gained carrier
May 8 00:34:28.865233 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:34:28.894314 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:34:28.900255 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:34:28.910023 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:34:28.930804 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:34:28.931454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:34:28.940194 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:34:28.940600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:34:28.943119 lvm[1333]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:34:28.967054 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:34:28.967675 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:34:28.967865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:34:28.967930 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:34:28.968109 systemd[1]: Reached target machines.target - Containers.
May 8 00:34:28.968958 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:34:28.974128 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:34:28.977174 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:34:28.977459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:34:28.979129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:34:28.981142 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:34:28.983120 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:34:28.984662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:34:29.005062 kernel: loop0: detected capacity change from 0 to 210664
May 8 00:34:29.017273 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:34:29.035440 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:34:29.036308 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:34:29.183063 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:34:29.232104 kernel: loop1: detected capacity change from 0 to 140768
May 8 00:34:29.328090 kernel: loop2: detected capacity change from 0 to 142488
May 8 00:34:29.362062 kernel: loop3: detected capacity change from 0 to 2976
May 8 00:34:29.420055 kernel: loop4: detected capacity change from 0 to 210664
May 8 00:34:29.501073 kernel: loop5: detected capacity change from 0 to 140768
May 8 00:34:29.541254 kernel: loop6: detected capacity change from 0 to 142488
May 8 00:34:29.600134 kernel: loop7: detected capacity change from 0 to 2976
May 8 00:34:29.611167 (sd-merge)[1356]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
May 8 00:34:29.611509 (sd-merge)[1356]: Merged extensions into '/usr'.
May 8 00:34:29.622564 systemd[1]: Reloading requested from client PID 1343 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:34:29.622576 systemd[1]: Reloading...
May 8 00:34:29.657217 zram_generator::config[1384]: No configuration found.
May 8 00:34:29.732817 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:34:29.752386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:34:29.798586 systemd[1]: Reloading finished in 175 ms.
May 8 00:34:29.809985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:34:29.818140 systemd[1]: Starting ensure-sysext.service...
May 8 00:34:29.820321 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:34:29.823011 systemd[1]: Reloading requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)...
May 8 00:34:29.823020 systemd[1]: Reloading...
May 8 00:34:29.834124 ldconfig[1339]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:34:29.839179 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:34:29.839520 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:34:29.839998 systemd-tmpfiles[1446]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:34:29.840609 systemd-tmpfiles[1446]: ACLs are not supported, ignoring.
May 8 00:34:29.840683 systemd-tmpfiles[1446]: ACLs are not supported, ignoring.
May 8 00:34:29.849563 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:34:29.849666 systemd-tmpfiles[1446]: Skipping /boot
May 8 00:34:29.855396 systemd-tmpfiles[1446]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:34:29.855475 systemd-tmpfiles[1446]: Skipping /boot
May 8 00:34:29.865070 zram_generator::config[1472]: No configuration found.
May 8 00:34:29.935164 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:34:29.952077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:34:29.994451 systemd[1]: Reloading finished in 171 ms.
May 8 00:34:30.006755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:34:30.020507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:34:30.024430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:34:30.038944 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:34:30.039980 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:34:30.042122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:34:30.044924 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:34:30.048323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.049285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:34:30.051197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:34:30.053463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:34:30.053780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:34:30.053848 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.054282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:34:30.054361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:34:30.061982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.063109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:34:30.063266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:34:30.063323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.063732 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:34:30.067324 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:34:30.067413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:34:30.070655 systemd[1]: Finished ensure-sysext.service.
May 8 00:34:30.070937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:34:30.071017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:34:30.073943 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.076167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:34:30.078873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:34:30.079050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:34:30.079080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:34:30.083811 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:34:30.083986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:34:30.084545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:34:30.084645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:34:30.088515 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:34:30.093684 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:34:30.098059 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:34:30.098182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:34:30.098708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:34:30.114959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:34:30.117282 augenrules[1588]: No rules
May 8 00:34:30.122249 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:34:30.122562 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:34:30.133292 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:34:30.147693 systemd-resolved[1546]: Positive Trust Anchors:
May 8 00:34:30.147702 systemd-resolved[1546]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:34:30.147725 systemd-resolved[1546]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:34:30.151999 systemd-resolved[1546]: Defaulting to hostname 'linux'.
May 8 00:34:30.153266 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:34:30.153451 systemd[1]: Reached target network.target - Network.
May 8 00:34:30.153556 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:34:30.161070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:34:30.161338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:34:30.163832 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:34:30.164158 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:34:30.164323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:34:30.164451 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:34:30.164564 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:34:30.164682 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:34:30.164697 systemd[1]: Reached target paths.target - Path Units.
May 8 00:34:30.164804 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:34:30.165073 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:34:30.165221 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:34:30.165330 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:34:30.165938 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:34:30.167255 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:34:30.168049 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:34:30.169672 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:34:30.169787 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:34:30.169877 systemd[1]: Reached target basic.target - Basic System.
May 8 00:34:30.170060 systemd[1]: System is tainted: cgroupsv1
May 8 00:34:30.170085 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:34:30.170098 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:34:30.173170 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:34:30.176180 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:34:30.177302 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:34:30.179645 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:34:30.179770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:34:30.184168 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:34:30.185447 jq[1606]: false
May 8 00:34:30.188159 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:34:30.191127 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:34:30.197167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:34:30.203932 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:34:30.205032 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:34:30.206213 extend-filesystems[1607]: Found loop4
May 8 00:34:30.206213 extend-filesystems[1607]: Found loop5
May 8 00:34:30.206213 extend-filesystems[1607]: Found loop6
May 8 00:34:30.206213 extend-filesystems[1607]: Found loop7
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda1
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda2
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda3
May 8 00:34:30.206213 extend-filesystems[1607]: Found usr
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda4
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda6
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda7
May 8 00:34:30.206213 extend-filesystems[1607]: Found sda9
May 8 00:34:30.206213 extend-filesystems[1607]: Checking size of /dev/sda9
May 8 00:34:30.211930 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:34:30.215129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:34:30.218166 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools...
May 8 00:34:30.223640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:34:30.224262 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:34:30.231169 extend-filesystems[1607]: Old size kept for /dev/sda9
May 8 00:34:30.231492 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:34:30.231638 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:34:30.241114 extend-filesystems[1607]: Found sr0
May 8 00:34:30.243843 jq[1624]: true
May 8 00:34:30.249101 update_engine[1619]: I20250508 00:34:30.247773 1619 main.cc:92] Flatcar Update Engine starting
May 8 00:34:30.245518 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:34:30.245648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:34:30.245975 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:34:30.251286 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:34:30.255341 (ntainerd)[1637]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:34:30.257195 dbus-daemon[1604]: [system] SELinux support is enabled
May 8 00:34:30.258299 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:34:30.261030 jq[1646]: true
May 8 00:34:30.277996 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1288)
May 8 00:34:30.278029 update_engine[1619]: I20250508 00:34:30.269138 1619 update_check_scheduler.cc:74] Next update check in 10m44s
May 8 00:34:30.282195 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools.
May 8 00:34:30.289667 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:34:30.292231 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:34:30.292519 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:34:30.292666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:34:30.292677 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:34:30.301165 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware...
May 8 00:34:30.303218 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:34:30.303928 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:34:30.307571 tar[1631]: linux-amd64/helm
May 8 00:34:30.312223 systemd-logind[1616]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:34:30.312234 systemd-logind[1616]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:34:30.314896 systemd-logind[1616]: New seat seat0.
May 8 00:34:30.317105 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:34:30.327175 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware.
May 8 00:34:30.356749 bash[1675]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:34:30.357140 unknown[1653]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath
May 8 00:34:30.358527 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:34:30.359064 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:34:30.361155 unknown[1653]: Core dump limit set to -1
May 8 00:34:30.375284 kernel: NET: Registered PF_VSOCK protocol family
May 8 00:34:30.481606 locksmithd[1674]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:34:30.578110 systemd-networkd[1293]: ens192: Gained IPv6LL
May 8 00:34:30.579730 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:34:30.580091 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:34:30.587289 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
May 8 00:34:30.601168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:34:30.605228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:34:30.664205 containerd[1637]: time="2025-05-08T00:34:30.661890646Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:34:30.676995 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:34:30.694429 sshd_keygen[1654]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:34:30.696997 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:34:30.697183 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
May 8 00:34:30.697538 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:34:30.721783 containerd[1637]: time="2025-05-08T00:34:30.721756828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.725665 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:34:30.727280 containerd[1637]: time="2025-05-08T00:34:30.727257564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727323019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727338040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727427292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727437664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727471222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727479935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727588817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727597847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727604980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727610905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727653764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730050 containerd[1637]: time="2025-05-08T00:34:30.727765039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:34:30.730226 containerd[1637]: time="2025-05-08T00:34:30.727852693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:34:30.730226 containerd[1637]: time="2025-05-08T00:34:30.727861976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:34:30.730226 containerd[1637]: time="2025-05-08T00:34:30.727909422Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:34:30.730226 containerd[1637]: time="2025-05-08T00:34:30.727935205Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:34:30.733194 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:34:30.737630 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:34:30.737757 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:34:30.739312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747006856Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747059184Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747070933Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747079796Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747089524Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747190084Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747367405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747426341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747435580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747443401Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747450807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747457945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747464632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748755 containerd[1637]: time="2025-05-08T00:34:30.747475551Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747483914Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747492447Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747499324Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747505693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747517153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747524617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747531422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747538879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747545629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747553006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747560781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747567891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747579076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:34:30.748981 containerd[1637]: time="2025-05-08T00:34:30.747588375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747595016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747601601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747608703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747616994Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747630960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747638877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747644695Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747682710Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747694638Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747701038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747707094Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747712912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747722377Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:34:30.749188 containerd[1637]: time="2025-05-08T00:34:30.747729091Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:34:30.749379 containerd[1637]: time="2025-05-08T00:34:30.747735876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.747903653Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.747937970Z" level=info msg="Connect containerd service"
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.747963148Z" level=info msg="using legacy CRI server"
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.747970851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.748028845Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.748345205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.748577557Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:34:30.749394 containerd[1637]: time="2025-05-08T00:34:30.748606379Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752063959Z" level=info msg="Start subscribing containerd event"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752099645Z" level=info msg="Start recovering state"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752136054Z" level=info msg="Start event monitor"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752147785Z" level=info msg="Start snapshots syncer"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752153494Z" level=info msg="Start cni network conf syncer for default"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752157615Z" level=info msg="Start streaming server"
May 8 00:34:30.753058 containerd[1637]: time="2025-05-08T00:34:30.752542983Z" level=info msg="containerd successfully booted in 0.093697s"
May 8 00:34:30.752255 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:34:30.760445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:34:30.765283 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:34:30.766217 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:34:30.766445 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:35:51.368813 systemd-timesyncd[1572]: Contacted time server 212.227.240.160:123 (0.flatcar.pool.ntp.org).
May 8 00:35:51.368859 systemd-timesyncd[1572]: Initial clock synchronization to Thu 2025-05-08 00:35:51.368713 UTC.
May 8 00:35:51.368891 systemd-resolved[1546]: Clock change detected. Flushing caches.
May 8 00:35:51.467220 tar[1631]: linux-amd64/LICENSE
May 8 00:35:51.467220 tar[1631]: linux-amd64/README.md
May 8 00:35:51.475683 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:35:52.613449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:35:52.614313 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:35:52.615086 systemd[1]: Startup finished in 6.965s (kernel) + 5.488s (userspace) = 12.453s.
May 8 00:35:52.625896 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:35:52.647294 login[1796]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 8 00:35:52.647572 login[1797]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 8 00:35:52.654254 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:35:52.664624 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:35:52.665926 systemd-logind[1616]: New session 2 of user core.
May 8 00:35:52.668969 systemd-logind[1616]: New session 1 of user core.
May 8 00:35:52.672820 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:35:52.677537 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:35:52.679731 (systemd)[1820]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:35:52.736367 systemd[1820]: Queued start job for default target default.target.
May 8 00:35:52.736581 systemd[1820]: Created slice app.slice - User Application Slice.
May 8 00:35:52.736596 systemd[1820]: Reached target paths.target - Paths.
May 8 00:35:52.736605 systemd[1820]: Reached target timers.target - Timers.
May 8 00:35:52.742434 systemd[1820]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:35:52.747624 systemd[1820]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:35:52.747661 systemd[1820]: Reached target sockets.target - Sockets.
May 8 00:35:52.747671 systemd[1820]: Reached target basic.target - Basic System.
May 8 00:35:52.747693 systemd[1820]: Reached target default.target - Main User Target.
May 8 00:35:52.747709 systemd[1820]: Startup finished in 64ms.
May 8 00:35:52.748221 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:35:52.749550 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:35:52.750033 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:35:53.250122 kubelet[1810]: E0508 00:35:53.250084 1810 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:35:53.251451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:35:53.251553 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:03.341400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:36:03.353583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:03.686471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:03.689224 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:36:03.748995 kubelet[1871]: E0508 00:36:03.748960 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:36:03.751901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:36:03.752000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:13.841194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:36:13.848448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:14.165802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:14.168962 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:36:14.202095 kubelet[1892]: E0508 00:36:14.202059 1892 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:36:14.203240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:36:14.203325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:21.030085 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:36:21.034567 systemd[1]: Started sshd@0-139.178.70.106:22-139.178.68.195:57086.service - OpenSSH per-connection server daemon (139.178.68.195:57086).
May 8 00:36:21.062100 sshd[1901]: Accepted publickey for core from 139.178.68.195 port 57086 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.062811 sshd[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.065385 systemd-logind[1616]: New session 3 of user core.
May 8 00:36:21.071482 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:36:21.123538 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.68.195:57094.service - OpenSSH per-connection server daemon (139.178.68.195:57094).
May 8 00:36:21.149643 sshd[1906]: Accepted publickey for core from 139.178.68.195 port 57094 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.150928 sshd[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.154688 systemd-logind[1616]: New session 4 of user core.
May 8 00:36:21.161536 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:36:21.211039 sshd[1906]: pam_unix(sshd:session): session closed for user core
May 8 00:36:21.219575 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.68.195:57098.service - OpenSSH per-connection server daemon (139.178.68.195:57098).
May 8 00:36:21.219911 systemd[1]: sshd@1-139.178.70.106:22-139.178.68.195:57094.service: Deactivated successfully.
May 8 00:36:21.220899 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:36:21.223635 systemd-logind[1616]: Session 4 logged out. Waiting for processes to exit.
May 8 00:36:21.224514 systemd-logind[1616]: Removed session 4.
May 8 00:36:21.246158 sshd[1911]: Accepted publickey for core from 139.178.68.195 port 57098 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.247359 sshd[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.250273 systemd-logind[1616]: New session 5 of user core.
May 8 00:36:21.257554 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:36:21.305221 sshd[1911]: pam_unix(sshd:session): session closed for user core
May 8 00:36:21.319531 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.68.195:57106.service - OpenSSH per-connection server daemon (139.178.68.195:57106).
May 8 00:36:21.319899 systemd[1]: sshd@2-139.178.70.106:22-139.178.68.195:57098.service: Deactivated successfully.
May 8 00:36:21.320987 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:36:21.322270 systemd-logind[1616]: Session 5 logged out. Waiting for processes to exit.
May 8 00:36:21.322855 systemd-logind[1616]: Removed session 5.
May 8 00:36:21.344666 sshd[1919]: Accepted publickey for core from 139.178.68.195 port 57106 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.345352 sshd[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.347985 systemd-logind[1616]: New session 6 of user core.
May 8 00:36:21.354491 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:36:21.403437 sshd[1919]: pam_unix(sshd:session): session closed for user core
May 8 00:36:21.408640 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.68.195:57116.service - OpenSSH per-connection server daemon (139.178.68.195:57116).
May 8 00:36:21.409164 systemd[1]: sshd@3-139.178.70.106:22-139.178.68.195:57106.service: Deactivated successfully.
May 8 00:36:21.409897 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:36:21.411391 systemd-logind[1616]: Session 6 logged out. Waiting for processes to exit.
May 8 00:36:21.412143 systemd-logind[1616]: Removed session 6.
May 8 00:36:21.432105 sshd[1927]: Accepted publickey for core from 139.178.68.195 port 57116 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.432830 sshd[1927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.435174 systemd-logind[1616]: New session 7 of user core.
May 8 00:36:21.438476 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:36:21.551228 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:36:21.551404 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:21.568658 sudo[1934]: pam_unix(sudo:session): session closed for user root
May 8 00:36:21.570882 sshd[1927]: pam_unix(sshd:session): session closed for user core
May 8 00:36:21.572780 systemd[1]: sshd@4-139.178.70.106:22-139.178.68.195:57116.service: Deactivated successfully.
May 8 00:36:21.573946 systemd-logind[1616]: Session 7 logged out. Waiting for processes to exit.
May 8 00:36:21.578529 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.68.195:57128.service - OpenSSH per-connection server daemon (139.178.68.195:57128).
May 8 00:36:21.578861 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:36:21.581584 systemd-logind[1616]: Removed session 7.
May 8 00:36:21.602443 sshd[1939]: Accepted publickey for core from 139.178.68.195 port 57128 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.603202 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.605603 systemd-logind[1616]: New session 8 of user core.
May 8 00:36:21.608551 systemd[1]: Started session-8.scope - Session 8 of User core.
May 8 00:36:21.656132 sudo[1944]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:36:21.656286 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:21.658302 sudo[1944]: pam_unix(sudo:session): session closed for user root
May 8 00:36:21.661189 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:36:21.661338 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:21.674496 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 8 00:36:21.675295 auditctl[1947]: No rules
May 8 00:36:21.675550 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:36:21.675667 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 8 00:36:21.677811 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:36:21.692602 augenrules[1966]: No rules
May 8 00:36:21.693236 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:36:21.694241 sudo[1943]: pam_unix(sudo:session): session closed for user root
May 8 00:36:21.695440 sshd[1939]: pam_unix(sshd:session): session closed for user core
May 8 00:36:21.702508 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.68.195:57136.service - OpenSSH per-connection server daemon (139.178.68.195:57136).
May 8 00:36:21.702765 systemd[1]: sshd@5-139.178.70.106:22-139.178.68.195:57128.service: Deactivated successfully.
May 8 00:36:21.703525 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:36:21.704930 systemd-logind[1616]: Session 8 logged out. Waiting for processes to exit.
May 8 00:36:21.706276 systemd-logind[1616]: Removed session 8.
May 8 00:36:21.725614 sshd[1973]: Accepted publickey for core from 139.178.68.195 port 57136 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:36:21.726361 sshd[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:36:21.729625 systemd-logind[1616]: New session 9 of user core.
May 8 00:36:21.740501 systemd[1]: Started session-9.scope - Session 9 of User core.
May 8 00:36:21.788066 sudo[1979]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:36:21.788227 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:36:22.163535 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:36:22.163847 (dockerd)[1995]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:36:22.513205 dockerd[1995]: time="2025-05-08T00:36:22.512839453Z" level=info msg="Starting up"
May 8 00:36:22.572815 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3692385778-merged.mount: Deactivated successfully.
May 8 00:36:22.815395 dockerd[1995]: time="2025-05-08T00:36:22.815300559Z" level=info msg="Loading containers: start."
May 8 00:36:22.904373 kernel: Initializing XFRM netlink socket
May 8 00:36:22.952175 systemd-networkd[1293]: docker0: Link UP
May 8 00:36:22.965499 dockerd[1995]: time="2025-05-08T00:36:22.965296206Z" level=info msg="Loading containers: done."
May 8 00:36:22.989270 dockerd[1995]: time="2025-05-08T00:36:22.989234579Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:36:22.989380 dockerd[1995]: time="2025-05-08T00:36:22.989332538Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:36:22.989434 dockerd[1995]: time="2025-05-08T00:36:22.989419372Z" level=info msg="Daemon has completed initialization"
May 8 00:36:23.004289 dockerd[1995]: time="2025-05-08T00:36:23.004076075Z" level=info msg="API listen on /run/docker.sock"
May 8 00:36:23.004228 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:36:23.885215 containerd[1637]: time="2025-05-08T00:36:23.885192103Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 8 00:36:24.341201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 8 00:36:24.347520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:24.412168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:24.415105 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:36:24.469609 kubelet[2154]: E0508 00:36:24.469584 2154 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:36:24.472447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:36:24.472542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:24.719538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582053128.mount: Deactivated successfully.
May 8 00:36:25.737678 containerd[1637]: time="2025-05-08T00:36:25.737617256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:25.737678 containerd[1637]: time="2025-05-08T00:36:25.737650992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
May 8 00:36:25.738327 containerd[1637]: time="2025-05-08T00:36:25.738311885Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:25.741792 containerd[1637]: time="2025-05-08T00:36:25.741779086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:25.742478 containerd[1637]: time="2025-05-08T00:36:25.742379821Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.857165656s"
May 8 00:36:25.742478 containerd[1637]: time="2025-05-08T00:36:25.742400005Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 8 00:36:25.755450 containerd[1637]: time="2025-05-08T00:36:25.755424315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 8 00:36:27.287366 containerd[1637]: time="2025-05-08T00:36:27.287312158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:27.288518 containerd[1637]: time="2025-05-08T00:36:27.288068176Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
May 8 00:36:27.288518 containerd[1637]: time="2025-05-08T00:36:27.288128327Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:27.290306 containerd[1637]: time="2025-05-08T00:36:27.290287754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:27.291152 containerd[1637]: time="2025-05-08T00:36:27.291133779Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.535685738s"
May 8 00:36:27.291228 containerd[1637]: time="2025-05-08T00:36:27.291216450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 8 00:36:27.308435 containerd[1637]: time="2025-05-08T00:36:27.308410538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 8 00:36:28.612627 containerd[1637]: time="2025-05-08T00:36:28.612394127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:28.613270 containerd[1637]: time="2025-05-08T00:36:28.612961764Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
May 8 00:36:28.613270 containerd[1637]: time="2025-05-08T00:36:28.613068164Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:28.614574 containerd[1637]: time="2025-05-08T00:36:28.614560984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:28.615164 containerd[1637]: time="2025-05-08T00:36:28.615147000Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.306712788s"
May 8 00:36:28.615192 containerd[1637]: time="2025-05-08T00:36:28.615165521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 8 00:36:28.628569 containerd[1637]: time="2025-05-08T00:36:28.628546545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 8 00:36:29.801208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469116550.mount: Deactivated successfully.
May 8 00:36:30.374369 containerd[1637]: time="2025-05-08T00:36:30.374259492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:30.382890 containerd[1637]: time="2025-05-08T00:36:30.382682839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
May 8 00:36:30.390294 containerd[1637]: time="2025-05-08T00:36:30.390256482Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:30.395478 containerd[1637]: time="2025-05-08T00:36:30.395446626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:30.396005 containerd[1637]: time="2025-05-08T00:36:30.395830602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.767260076s"
May 8 00:36:30.396005 containerd[1637]: time="2025-05-08T00:36:30.395858366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 8 00:36:30.412315 containerd[1637]: time="2025-05-08T00:36:30.412285222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 8 00:36:30.946528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064623992.mount: Deactivated successfully.
May 8 00:36:31.585166 containerd[1637]: time="2025-05-08T00:36:31.585140826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:31.586049 containerd[1637]: time="2025-05-08T00:36:31.585547070Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 8 00:36:31.586049 containerd[1637]: time="2025-05-08T00:36:31.586028963Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:31.587596 containerd[1637]: time="2025-05-08T00:36:31.587576640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:31.588537 containerd[1637]: time="2025-05-08T00:36:31.588234341Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.175922374s"
May 8 00:36:31.588537 containerd[1637]: time="2025-05-08T00:36:31.588255258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 8 00:36:31.601617 containerd[1637]: time="2025-05-08T00:36:31.601597969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 8 00:36:32.049179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381572211.mount: Deactivated successfully.
May 8 00:36:32.051896 containerd[1637]: time="2025-05-08T00:36:32.051662928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
May 8 00:36:32.051992 containerd[1637]: time="2025-05-08T00:36:32.051979812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:32.052808 containerd[1637]: time="2025-05-08T00:36:32.052783473Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:32.053485 containerd[1637]: time="2025-05-08T00:36:32.053219678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 451.602749ms"
May 8 00:36:32.053485 containerd[1637]: time="2025-05-08T00:36:32.053237457Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 8 00:36:32.053662 containerd[1637]: time="2025-05-08T00:36:32.053651625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:32.066515 containerd[1637]: time="2025-05-08T00:36:32.066495194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 8 00:36:32.512850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800386508.mount: Deactivated successfully.
May 8 00:36:34.581489 containerd[1637]: time="2025-05-08T00:36:34.580800153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:34.581489 containerd[1637]: time="2025-05-08T00:36:34.581126832Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
May 8 00:36:34.581489 containerd[1637]: time="2025-05-08T00:36:34.581466922Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:34.583211 containerd[1637]: time="2025-05-08T00:36:34.583195394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:36:34.584914 containerd[1637]: time="2025-05-08T00:36:34.584895885Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.518379721s"
May 8 00:36:34.584992 containerd[1637]: time="2025-05-08T00:36:34.584972509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 8 00:36:34.591075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 8 00:36:34.596572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:35.130442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:35.132088 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:36:35.176479 kubelet[2371]: E0508 00:36:35.176445 2371 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:36:35.177638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:36:35.177750 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:36:36.175070 update_engine[1619]: I20250508 00:36:36.174366 1619 update_attempter.cc:509] Updating boot flags...
May 8 00:36:36.239994 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2441)
May 8 00:36:36.915021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:36.924571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:36.936762 systemd[1]: Reloading requested from client PID 2451 ('systemctl') (unit session-9.scope)...
May 8 00:36:36.936839 systemd[1]: Reloading...
May 8 00:36:36.993362 zram_generator::config[2488]: No configuration found.
May 8 00:36:37.067290 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
May 8 00:36:37.083549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:36:37.126792 systemd[1]: Reloading finished in 189 ms.
May 8 00:36:37.165559 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:36:37.165718 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:36:37.165984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:37.172822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:36:37.487458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:36:37.491092 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:36:37.522610 kubelet[2568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:36:37.522610 kubelet[2568]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:36:37.522610 kubelet[2568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:36:37.535387 kubelet[2568]: I0508 00:36:37.535305 2568 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:36:37.734945 kubelet[2568]: I0508 00:36:37.734909 2568 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:36:37.734945 kubelet[2568]: I0508 00:36:37.734933 2568 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:36:37.735142 kubelet[2568]: I0508 00:36:37.735081 2568 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:36:37.787866 kubelet[2568]: I0508 00:36:37.787548 2568 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:36:37.796975 kubelet[2568]: E0508 00:36:37.796950 2568 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.106:6443: connect: connection refused
May 8 00:36:37.867327 kubelet[2568]: I0508 00:36:37.867293 2568 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:36:37.876398 kubelet[2568]: I0508 00:36:37.876352 2568 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:36:37.877811 kubelet[2568]: I0508 00:36:37.876394 2568 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:36:37.877907 kubelet[2568]: I0508 00:36:37.877817 2568 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:36:37.877907 kubelet[2568]: I0508 00:36:37.877828 2568 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:36:37.880537 kubelet[2568]: I0508 00:36:37.880519 2568 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:36:37.881446 kubelet[2568]: I0508 00:36:37.881432 2568 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:36:37.881499 kubelet[2568]: I0508 00:36:37.881448 2568 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:36:37.881499 kubelet[2568]: I0508 00:36:37.881468 2568 kubelet.go:312] "Adding apiserver pod source"
May 8 00:36:37.881499 kubelet[2568]: I0508 00:36:37.881484 2568 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:36:37.884527 kubelet[2568]: I0508 00:36:37.884317 2568 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:36:37.886262 kubelet[2568]: I0508 00:36:37.885750 2568 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:36:37.886262 kubelet[2568]: W0508 00:36:37.885794 2568 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:36:37.886262 kubelet[2568]: I0508 00:36:37.886218 2568 server.go:1264] "Started kubelet" May 8 00:36:37.888219 kubelet[2568]: W0508 00:36:37.887927 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.888219 kubelet[2568]: E0508 00:36:37.887971 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.889455 kubelet[2568]: W0508 00:36:37.889420 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.889509 kubelet[2568]: E0508 00:36:37.889459 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.889556 kubelet[2568]: I0508 00:36:37.889525 2568 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:36:37.895120 kubelet[2568]: I0508 00:36:37.894613 2568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:36:37.895120 kubelet[2568]: I0508 00:36:37.894850 2568 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:36:37.895120 kubelet[2568]: E0508 00:36:37.894946 2568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d663292c1ede9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:36:37.886201321 +0000 UTC m=+0.392266540,LastTimestamp:2025-05-08 00:36:37.886201321 +0000 UTC m=+0.392266540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:36:37.896003 kubelet[2568]: I0508 00:36:37.895504 2568 server.go:455] "Adding debug handlers to kubelet server" May 8 00:36:37.898122 kubelet[2568]: I0508 00:36:37.897817 2568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:36:37.900118 kubelet[2568]: E0508 00:36:37.900107 2568 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:36:37.900191 kubelet[2568]: I0508 00:36:37.900185 2568 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:36:37.900299 kubelet[2568]: I0508 00:36:37.900293 2568 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:36:37.900470 kubelet[2568]: I0508 00:36:37.900363 2568 reconciler.go:26] "Reconciler: start to sync state" May 8 00:36:37.900852 kubelet[2568]: W0508 00:36:37.900827 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.900905 kubelet[2568]: E0508 00:36:37.900899 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.904979 kubelet[2568]: E0508 00:36:37.904940 2568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" May 8 00:36:37.905168 kubelet[2568]: E0508 00:36:37.905157 2568 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:36:37.905547 kubelet[2568]: I0508 00:36:37.905540 2568 factory.go:221] Registration of the systemd container factory successfully May 8 00:36:37.905625 kubelet[2568]: I0508 00:36:37.905616 2568 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:36:37.906214 kubelet[2568]: I0508 00:36:37.906206 2568 factory.go:221] Registration of the containerd container factory successfully May 8 00:36:37.920558 kubelet[2568]: I0508 00:36:37.920536 2568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:36:37.921360 kubelet[2568]: I0508 00:36:37.921207 2568 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:36:37.921360 kubelet[2568]: I0508 00:36:37.921221 2568 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:36:37.921360 kubelet[2568]: I0508 00:36:37.921233 2568 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:36:37.921360 kubelet[2568]: E0508 00:36:37.921257 2568 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:36:37.925449 kubelet[2568]: W0508 00:36:37.925429 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.925509 kubelet[2568]: E0508 00:36:37.925463 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:37.928116 kubelet[2568]: I0508 00:36:37.928087 2568 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:36:37.928116 kubelet[2568]: I0508 00:36:37.928104 2568 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:36:37.928116 kubelet[2568]: I0508 00:36:37.928117 2568 state_mem.go:36] "Initialized new in-memory state store" May 8 00:36:37.929398 kubelet[2568]: I0508 00:36:37.929384 2568 policy_none.go:49] "None policy: Start" May 8 00:36:37.930065 kubelet[2568]: I0508 00:36:37.929842 2568 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:36:37.930065 kubelet[2568]: I0508 00:36:37.929856 2568 state_mem.go:35] "Initializing new in-memory state store" May 8 00:36:37.934740 kubelet[2568]: I0508 00:36:37.934223 2568 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:36:37.934740 kubelet[2568]: I0508 00:36:37.934362 2568 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:36:37.934740 kubelet[2568]: I0508 00:36:37.934424 2568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:36:37.935373 kubelet[2568]: E0508 00:36:37.935337 2568 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:36:38.001340 kubelet[2568]: I0508 00:36:38.001317 2568 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:38.001626 kubelet[2568]: E0508 00:36:38.001607 2568 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:36:38.021857 kubelet[2568]: I0508 00:36:38.021802 2568 topology_manager.go:215] "Topology Admit Handler" podUID="a9a7bd3c743f7de1557201987caabff1" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:36:38.023350 kubelet[2568]: I0508 00:36:38.022750 2568 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:36:38.023566 kubelet[2568]: I0508 00:36:38.023550 2568 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:36:38.105441 kubelet[2568]: E0508 00:36:38.105402 2568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" May 8 00:36:38.200871 kubelet[2568]: I0508 
00:36:38.200834 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:38.200871 kubelet[2568]: I0508 00:36:38.200877 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:38.200988 kubelet[2568]: I0508 00:36:38.200897 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:38.200988 kubelet[2568]: I0508 00:36:38.200914 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:38.200988 kubelet[2568]: I0508 00:36:38.200929 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:38.200988 
kubelet[2568]: I0508 00:36:38.200945 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:38.200988 kubelet[2568]: I0508 00:36:38.200959 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:36:38.201074 kubelet[2568]: I0508 00:36:38.200972 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:38.201074 kubelet[2568]: I0508 00:36:38.201007 2568 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:38.202819 kubelet[2568]: I0508 00:36:38.202627 2568 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:38.202819 kubelet[2568]: E0508 00:36:38.202800 2568 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:36:38.239584 systemd[1]: Started 
sshd@7-139.178.70.106:22-195.133.158.175:45056.service - OpenSSH per-connection server daemon (195.133.158.175:45056). May 8 00:36:38.327139 containerd[1637]: time="2025-05-08T00:36:38.327113768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9a7bd3c743f7de1557201987caabff1,Namespace:kube-system,Attempt:0,}" May 8 00:36:38.330272 containerd[1637]: time="2025-05-08T00:36:38.330158974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:36:38.330966 containerd[1637]: time="2025-05-08T00:36:38.330945405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:36:38.506534 kubelet[2568]: E0508 00:36:38.506444 2568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" May 8 00:36:38.605235 kubelet[2568]: I0508 00:36:38.605208 2568 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:38.605561 kubelet[2568]: E0508 00:36:38.605431 2568 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:36:38.780530 kubelet[2568]: W0508 00:36:38.780402 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:38.780530 kubelet[2568]: E0508 00:36:38.780466 2568 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:38.969708 kubelet[2568]: W0508 00:36:38.969647 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:38.969708 kubelet[2568]: E0508 00:36:38.969702 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:39.092588 kubelet[2568]: W0508 00:36:39.092549 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:39.092588 kubelet[2568]: E0508 00:36:39.092575 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:39.249959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465296119.mount: Deactivated successfully. 
May 8 00:36:39.288842 kubelet[2568]: W0508 00:36:39.288780 2568 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:39.288842 kubelet[2568]: E0508 00:36:39.288827 2568 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:39.294207 containerd[1637]: time="2025-05-08T00:36:39.294179736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:36:39.307285 kubelet[2568]: E0508 00:36:39.307251 2568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s" May 8 00:36:39.310474 containerd[1637]: time="2025-05-08T00:36:39.310426668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:36:39.311237 containerd[1637]: time="2025-05-08T00:36:39.311207594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:36:39.311785 containerd[1637]: time="2025-05-08T00:36:39.311745401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:36:39.312589 containerd[1637]: time="2025-05-08T00:36:39.312550702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 
00:36:39.312660 containerd[1637]: time="2025-05-08T00:36:39.312612491Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:36:39.315886 containerd[1637]: time="2025-05-08T00:36:39.315839070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:36:39.316672 containerd[1637]: time="2025-05-08T00:36:39.316523657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 985.519887ms" May 8 00:36:39.317896 containerd[1637]: time="2025-05-08T00:36:39.317868266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 990.700223ms" May 8 00:36:39.319518 containerd[1637]: time="2025-05-08T00:36:39.319475354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:36:39.320718 containerd[1637]: time="2025-05-08T00:36:39.320508447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 990.316414ms" May 8 00:36:39.408561 kubelet[2568]: I0508 00:36:39.408099 2568 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:39.408561 kubelet[2568]: E0508 00:36:39.408274 2568 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 8 00:36:39.419334 containerd[1637]: time="2025-05-08T00:36:39.417182957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:36:39.419334 containerd[1637]: time="2025-05-08T00:36:39.417218465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:36:39.419334 containerd[1637]: time="2025-05-08T00:36:39.417225914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.419334 containerd[1637]: time="2025-05-08T00:36:39.417281182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.428267 containerd[1637]: time="2025-05-08T00:36:39.426900140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:36:39.428267 containerd[1637]: time="2025-05-08T00:36:39.426940545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:36:39.428267 containerd[1637]: time="2025-05-08T00:36:39.426951267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.428267 containerd[1637]: time="2025-05-08T00:36:39.427008113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.430476 containerd[1637]: time="2025-05-08T00:36:39.430393567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:36:39.430568 containerd[1637]: time="2025-05-08T00:36:39.430466031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:36:39.430568 containerd[1637]: time="2025-05-08T00:36:39.430544455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.430720 containerd[1637]: time="2025-05-08T00:36:39.430659458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:39.489587 containerd[1637]: time="2025-05-08T00:36:39.489527745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9a7bd3c743f7de1557201987caabff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c78e5c313067c3682daa02016fb3e65b275124f566854cd6703d1c57f8c990d\"" May 8 00:36:39.491532 containerd[1637]: time="2025-05-08T00:36:39.491468607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f8c9247b217fd8c60f60bb6dc2ef187ef4e53abdc32ac26c113bc8c6edaec75\"" May 8 00:36:39.496723 containerd[1637]: time="2025-05-08T00:36:39.496701734Z" level=info msg="CreateContainer within sandbox \"6c78e5c313067c3682daa02016fb3e65b275124f566854cd6703d1c57f8c990d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:36:39.497773 containerd[1637]: time="2025-05-08T00:36:39.497751670Z" level=info msg="CreateContainer within sandbox \"7f8c9247b217fd8c60f60bb6dc2ef187ef4e53abdc32ac26c113bc8c6edaec75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:36:39.505813 containerd[1637]: time="2025-05-08T00:36:39.505783431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"df92f2f72807ddcc353d7275af798e15a4416b5ee6450e33a3e23f239ccd2c21\"" May 8 00:36:39.508035 containerd[1637]: time="2025-05-08T00:36:39.507995558Z" level=info msg="CreateContainer within sandbox \"df92f2f72807ddcc353d7275af798e15a4416b5ee6450e33a3e23f239ccd2c21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:36:39.514845 containerd[1637]: time="2025-05-08T00:36:39.514756641Z" level=info msg="CreateContainer within sandbox 
\"7f8c9247b217fd8c60f60bb6dc2ef187ef4e53abdc32ac26c113bc8c6edaec75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f8f934b206a95b8f5b3e458037c404bc0d2da7a3cd1ccb35152a010118bc2449\"" May 8 00:36:39.515246 containerd[1637]: time="2025-05-08T00:36:39.515232707Z" level=info msg="StartContainer for \"f8f934b206a95b8f5b3e458037c404bc0d2da7a3cd1ccb35152a010118bc2449\"" May 8 00:36:39.516384 containerd[1637]: time="2025-05-08T00:36:39.516183273Z" level=info msg="CreateContainer within sandbox \"6c78e5c313067c3682daa02016fb3e65b275124f566854cd6703d1c57f8c990d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0428704be396d0f2047cfc0877e82490ce2cec70f08988a2c59f3e56346013bd\"" May 8 00:36:39.516640 containerd[1637]: time="2025-05-08T00:36:39.516615077Z" level=info msg="StartContainer for \"0428704be396d0f2047cfc0877e82490ce2cec70f08988a2c59f3e56346013bd\"" May 8 00:36:39.516992 containerd[1637]: time="2025-05-08T00:36:39.516969534Z" level=info msg="CreateContainer within sandbox \"df92f2f72807ddcc353d7275af798e15a4416b5ee6450e33a3e23f239ccd2c21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"004e33a98ce3d959bfd6408e68b27caf628bf73320231b522506f3df477863c9\"" May 8 00:36:39.517434 containerd[1637]: time="2025-05-08T00:36:39.517419958Z" level=info msg="StartContainer for \"004e33a98ce3d959bfd6408e68b27caf628bf73320231b522506f3df477863c9\"" May 8 00:36:39.591860 containerd[1637]: time="2025-05-08T00:36:39.591543048Z" level=info msg="StartContainer for \"004e33a98ce3d959bfd6408e68b27caf628bf73320231b522506f3df477863c9\" returns successfully" May 8 00:36:39.599867 containerd[1637]: time="2025-05-08T00:36:39.599842341Z" level=info msg="StartContainer for \"0428704be396d0f2047cfc0877e82490ce2cec70f08988a2c59f3e56346013bd\" returns successfully" May 8 00:36:39.606905 containerd[1637]: time="2025-05-08T00:36:39.606877056Z" level=info msg="StartContainer for 
\"f8f934b206a95b8f5b3e458037c404bc0d2da7a3cd1ccb35152a010118bc2449\" returns successfully" May 8 00:36:39.806210 kubelet[2568]: E0508 00:36:39.805701 2568 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.106:6443: connect: connection refused May 8 00:36:40.327900 sshd[2598]: Invalid user apache from 195.133.158.175 port 45056 May 8 00:36:40.775900 sshd[2843]: pam_faillock(sshd:auth): User unknown May 8 00:36:40.783935 sshd[2598]: Postponed keyboard-interactive for invalid user apache from 195.133.158.175 port 45056 ssh2 [preauth] May 8 00:36:41.009613 kubelet[2568]: I0508 00:36:41.009515 2568 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:41.260657 sshd[2843]: pam_unix(sshd:auth): check pass; user unknown May 8 00:36:41.260681 sshd[2843]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=195.133.158.175 May 8 00:36:41.262680 sshd[2843]: pam_faillock(sshd:auth): User unknown May 8 00:36:41.275493 kubelet[2568]: E0508 00:36:41.275458 2568 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:36:41.362423 kubelet[2568]: I0508 00:36:41.362388 2568 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:36:41.372452 kubelet[2568]: E0508 00:36:41.372426 2568 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:36:41.884252 kubelet[2568]: I0508 00:36:41.884220 2568 apiserver.go:52] "Watching apiserver" May 8 00:36:41.901308 kubelet[2568]: I0508 00:36:41.901267 2568 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 
00:36:42.012887 kubelet[2568]: E0508 00:36:42.012607 2568 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 8 00:36:42.756484 sshd[2598]: PAM: Permission denied for illegal user apache from 195.133.158.175 May 8 00:36:42.756834 sshd[2598]: Failed keyboard-interactive/pam for invalid user apache from 195.133.158.175 port 45056 ssh2 May 8 00:36:43.036496 systemd[1]: Reloading requested from client PID 2846 ('systemctl') (unit session-9.scope)... May 8 00:36:43.036510 systemd[1]: Reloading... May 8 00:36:43.101066 zram_generator::config[2885]: No configuration found. May 8 00:36:43.193271 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 8 00:36:43.209011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:36:43.257436 systemd[1]: Reloading finished in 220 ms. 
May 8 00:36:43.305282 kubelet[2568]: I0508 00:36:43.304956 2568 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:36:43.305282 kubelet[2568]: E0508 00:36:43.304909 2568 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183d663292c1ede9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:36:37.886201321 +0000 UTC m=+0.392266540,LastTimestamp:2025-05-08 00:36:37.886201321 +0000 UTC m=+0.392266540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:36:43.305117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:36:43.320256 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:36:43.320875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:36:43.326857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:36:43.348931 sshd[2598]: Connection closed by invalid user apache 195.133.158.175 port 45056 [preauth] May 8 00:36:43.350201 systemd[1]: sshd@7-139.178.70.106:22-195.133.158.175:45056.service: Deactivated successfully. May 8 00:36:43.960224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:36:43.971482 (kubelet)[2965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:36:44.131104 kubelet[2965]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:36:44.131104 kubelet[2965]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:36:44.131104 kubelet[2965]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:36:44.131404 kubelet[2965]: I0508 00:36:44.131137 2965 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:36:44.143401 kubelet[2965]: I0508 00:36:44.143381 2965 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:36:44.143920 kubelet[2965]: I0508 00:36:44.143526 2965 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:36:44.143920 kubelet[2965]: I0508 00:36:44.143695 2965 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:36:44.144785 kubelet[2965]: I0508 00:36:44.144775 2965 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:36:44.145860 kubelet[2965]: I0508 00:36:44.145848 2965 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:36:44.150802 kubelet[2965]: I0508 00:36:44.150786 2965 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:36:44.151239 kubelet[2965]: I0508 00:36:44.151218 2965 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:36:44.151470 kubelet[2965]: I0508 00:36:44.151293 2965 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:36:44.151582 kubelet[2965]: I0508 00:36:44.151573 2965 topology_manager.go:138] "Creating topology manager with none policy" May 8 
00:36:44.151634 kubelet[2965]: I0508 00:36:44.151628 2965 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:36:44.151699 kubelet[2965]: I0508 00:36:44.151693 2965 state_mem.go:36] "Initialized new in-memory state store" May 8 00:36:44.151796 kubelet[2965]: I0508 00:36:44.151790 2965 kubelet.go:400] "Attempting to sync node with API server" May 8 00:36:44.152209 kubelet[2965]: I0508 00:36:44.151834 2965 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:36:44.152209 kubelet[2965]: I0508 00:36:44.151852 2965 kubelet.go:312] "Adding apiserver pod source" May 8 00:36:44.152209 kubelet[2965]: I0508 00:36:44.151867 2965 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:36:44.155072 kubelet[2965]: I0508 00:36:44.155051 2965 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:36:44.155209 kubelet[2965]: I0508 00:36:44.155196 2965 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:36:44.155512 kubelet[2965]: I0508 00:36:44.155498 2965 server.go:1264] "Started kubelet" May 8 00:36:44.158607 kubelet[2965]: I0508 00:36:44.158166 2965 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:36:44.161381 kubelet[2965]: I0508 00:36:44.160918 2965 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:36:44.162555 kubelet[2965]: I0508 00:36:44.162544 2965 server.go:455] "Adding debug handlers to kubelet server" May 8 00:36:44.170020 kubelet[2965]: I0508 00:36:44.169982 2965 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:36:44.170234 kubelet[2965]: I0508 00:36:44.170227 2965 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:36:44.171242 kubelet[2965]: I0508 00:36:44.171233 2965 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:36:44.172822 kubelet[2965]: I0508 00:36:44.172607 2965 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:36:44.173874 kubelet[2965]: I0508 00:36:44.173865 2965 reconciler.go:26] "Reconciler: start to sync state" May 8 00:36:44.181670 kubelet[2965]: I0508 00:36:44.181641 2965 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:36:44.182870 kubelet[2965]: I0508 00:36:44.182305 2965 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:36:44.182870 kubelet[2965]: I0508 00:36:44.182324 2965 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:36:44.182870 kubelet[2965]: I0508 00:36:44.182335 2965 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:36:44.182870 kubelet[2965]: E0508 00:36:44.182370 2965 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:36:44.186125 kubelet[2965]: I0508 00:36:44.185040 2965 factory.go:221] Registration of the systemd container factory successfully May 8 00:36:44.186125 kubelet[2965]: I0508 00:36:44.185115 2965 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:36:44.189996 kubelet[2965]: E0508 00:36:44.189971 2965 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:36:44.190340 kubelet[2965]: I0508 00:36:44.190194 2965 factory.go:221] Registration of the containerd container factory successfully May 8 00:36:44.235682 kubelet[2965]: I0508 00:36:44.235629 2965 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:36:44.236403 kubelet[2965]: I0508 00:36:44.236364 2965 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:36:44.236403 kubelet[2965]: I0508 00:36:44.236385 2965 state_mem.go:36] "Initialized new in-memory state store" May 8 00:36:44.236491 kubelet[2965]: I0508 00:36:44.236481 2965 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:36:44.236515 kubelet[2965]: I0508 00:36:44.236490 2965 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:36:44.236515 kubelet[2965]: I0508 00:36:44.236506 2965 policy_none.go:49] "None policy: Start" May 8 00:36:44.236920 kubelet[2965]: I0508 00:36:44.236907 2965 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:36:44.236957 kubelet[2965]: I0508 00:36:44.236922 2965 state_mem.go:35] "Initializing new in-memory state store" May 8 00:36:44.237023 kubelet[2965]: I0508 00:36:44.237013 2965 state_mem.go:75] "Updated machine memory state" May 8 00:36:44.238274 kubelet[2965]: I0508 00:36:44.238175 2965 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:36:44.238308 kubelet[2965]: I0508 00:36:44.238283 2965 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:36:44.238582 kubelet[2965]: I0508 00:36:44.238334 2965 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:36:44.272475 kubelet[2965]: I0508 00:36:44.272379 2965 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:36:44.283742 kubelet[2965]: I0508 00:36:44.283719 2965 
topology_manager.go:215] "Topology Admit Handler" podUID="a9a7bd3c743f7de1557201987caabff1" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:36:44.283999 kubelet[2965]: I0508 00:36:44.283824 2965 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:36:44.283999 kubelet[2965]: I0508 00:36:44.283966 2965 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:36:44.298782 kubelet[2965]: I0508 00:36:44.298767 2965 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:36:44.299295 kubelet[2965]: I0508 00:36:44.298859 2965 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:36:44.375600 kubelet[2965]: I0508 00:36:44.375421 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:36:44.375600 kubelet[2965]: I0508 00:36:44.375455 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:44.375600 kubelet[2965]: I0508 00:36:44.375472 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:44.375600 kubelet[2965]: I0508 00:36:44.375491 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:44.375600 kubelet[2965]: I0508 00:36:44.375511 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:44.375799 kubelet[2965]: I0508 00:36:44.375526 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:44.375799 kubelet[2965]: I0508 00:36:44.375541 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:36:44.375799 kubelet[2965]: I0508 00:36:44.375557 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-ca-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:44.375799 kubelet[2965]: I0508 00:36:44.375572 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9a7bd3c743f7de1557201987caabff1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9a7bd3c743f7de1557201987caabff1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:36:45.154988 kubelet[2965]: I0508 00:36:45.154927 2965 apiserver.go:52] "Watching apiserver" May 8 00:36:45.173850 kubelet[2965]: I0508 00:36:45.173807 2965 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:36:45.229511 kubelet[2965]: E0508 00:36:45.229487 2965 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:36:45.300207 kubelet[2965]: I0508 00:36:45.300166 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3000646439999999 podStartE2EDuration="1.300064644s" podCreationTimestamp="2025-05-08 00:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:36:45.287234432 +0000 UTC m=+1.220543065" watchObservedRunningTime="2025-05-08 00:36:45.300064644 +0000 UTC m=+1.233373268" May 8 00:36:45.300386 kubelet[2965]: I0508 00:36:45.300368 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.300227829 podStartE2EDuration="1.300227829s" podCreationTimestamp="2025-05-08 00:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:36:45.300032535 +0000 UTC m=+1.233341169" 
watchObservedRunningTime="2025-05-08 00:36:45.300227829 +0000 UTC m=+1.233536462" May 8 00:36:48.860730 sudo[1979]: pam_unix(sudo:session): session closed for user root May 8 00:36:48.873227 sshd[1973]: pam_unix(sshd:session): session closed for user core May 8 00:36:48.875265 systemd[1]: sshd@6-139.178.70.106:22-139.178.68.195:57136.service: Deactivated successfully. May 8 00:36:48.878510 systemd-logind[1616]: Session 9 logged out. Waiting for processes to exit. May 8 00:36:48.879094 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:36:48.880693 systemd-logind[1616]: Removed session 9. May 8 00:36:50.065978 kubelet[2965]: I0508 00:36:50.065906 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.065895102 podStartE2EDuration="6.065895102s" podCreationTimestamp="2025-05-08 00:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:36:45.331655493 +0000 UTC m=+1.264964126" watchObservedRunningTime="2025-05-08 00:36:50.065895102 +0000 UTC m=+5.999203730" May 8 00:36:57.076373 kubelet[2965]: I0508 00:36:57.076310 2965 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:36:57.086845 containerd[1637]: time="2025-05-08T00:36:57.086737398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:36:57.087174 kubelet[2965]: I0508 00:36:57.086993 2965 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:36:58.046093 kubelet[2965]: I0508 00:36:58.044675 2965 topology_manager.go:215] "Topology Admit Handler" podUID="07a8d446-af61-4488-823b-ada3ac7b5be3" podNamespace="kube-system" podName="kube-proxy-r9wlz" May 8 00:36:58.164042 kubelet[2965]: I0508 00:36:58.164010 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07a8d446-af61-4488-823b-ada3ac7b5be3-kube-proxy\") pod \"kube-proxy-r9wlz\" (UID: \"07a8d446-af61-4488-823b-ada3ac7b5be3\") " pod="kube-system/kube-proxy-r9wlz" May 8 00:36:58.164042 kubelet[2965]: I0508 00:36:58.164040 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a8d446-af61-4488-823b-ada3ac7b5be3-xtables-lock\") pod \"kube-proxy-r9wlz\" (UID: \"07a8d446-af61-4488-823b-ada3ac7b5be3\") " pod="kube-system/kube-proxy-r9wlz" May 8 00:36:58.164325 kubelet[2965]: I0508 00:36:58.164053 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a8d446-af61-4488-823b-ada3ac7b5be3-lib-modules\") pod \"kube-proxy-r9wlz\" (UID: \"07a8d446-af61-4488-823b-ada3ac7b5be3\") " pod="kube-system/kube-proxy-r9wlz" May 8 00:36:58.164325 kubelet[2965]: I0508 00:36:58.164065 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwsm\" (UniqueName: \"kubernetes.io/projected/07a8d446-af61-4488-823b-ada3ac7b5be3-kube-api-access-8xwsm\") pod \"kube-proxy-r9wlz\" (UID: \"07a8d446-af61-4488-823b-ada3ac7b5be3\") " pod="kube-system/kube-proxy-r9wlz" May 8 00:36:58.194859 kubelet[2965]: I0508 00:36:58.193816 2965 topology_manager.go:215] "Topology Admit 
Handler" podUID="c928df0e-4d47-410b-887f-bbf41ec0e7c8" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-7wtb6" May 8 00:36:58.363988 containerd[1637]: time="2025-05-08T00:36:58.363947954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9wlz,Uid:07a8d446-af61-4488-823b-ada3ac7b5be3,Namespace:kube-system,Attempt:0,}" May 8 00:36:58.365135 kubelet[2965]: I0508 00:36:58.365061 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx2h\" (UniqueName: \"kubernetes.io/projected/c928df0e-4d47-410b-887f-bbf41ec0e7c8-kube-api-access-qxx2h\") pod \"tigera-operator-797db67f8-7wtb6\" (UID: \"c928df0e-4d47-410b-887f-bbf41ec0e7c8\") " pod="tigera-operator/tigera-operator-797db67f8-7wtb6" May 8 00:36:58.365135 kubelet[2965]: I0508 00:36:58.365081 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c928df0e-4d47-410b-887f-bbf41ec0e7c8-var-lib-calico\") pod \"tigera-operator-797db67f8-7wtb6\" (UID: \"c928df0e-4d47-410b-887f-bbf41ec0e7c8\") " pod="tigera-operator/tigera-operator-797db67f8-7wtb6" May 8 00:36:58.424756 containerd[1637]: time="2025-05-08T00:36:58.424690533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:36:58.424756 containerd[1637]: time="2025-05-08T00:36:58.424730803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:36:58.424902 containerd[1637]: time="2025-05-08T00:36:58.424745464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:58.425359 containerd[1637]: time="2025-05-08T00:36:58.425215031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:58.452132 containerd[1637]: time="2025-05-08T00:36:58.452106353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9wlz,Uid:07a8d446-af61-4488-823b-ada3ac7b5be3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e640608b6094c644858ee5a61dc73fc63a4d989ef1b2c0f2992b9706e10c0044\"" May 8 00:36:58.458366 containerd[1637]: time="2025-05-08T00:36:58.458318391Z" level=info msg="CreateContainer within sandbox \"e640608b6094c644858ee5a61dc73fc63a4d989ef1b2c0f2992b9706e10c0044\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:36:58.500746 containerd[1637]: time="2025-05-08T00:36:58.500418959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-7wtb6,Uid:c928df0e-4d47-410b-887f-bbf41ec0e7c8,Namespace:tigera-operator,Attempt:0,}" May 8 00:36:58.506404 containerd[1637]: time="2025-05-08T00:36:58.506376737Z" level=info msg="CreateContainer within sandbox \"e640608b6094c644858ee5a61dc73fc63a4d989ef1b2c0f2992b9706e10c0044\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9268cad0113731901e3749a7f22b83bf9a374c2e73cd3bb8985c4aadd322e03\"" May 8 00:36:58.506911 containerd[1637]: time="2025-05-08T00:36:58.506871459Z" level=info msg="StartContainer for \"c9268cad0113731901e3749a7f22b83bf9a374c2e73cd3bb8985c4aadd322e03\"" May 8 00:36:58.518846 containerd[1637]: time="2025-05-08T00:36:58.518605652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:36:58.518846 containerd[1637]: time="2025-05-08T00:36:58.518733868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:36:58.518846 containerd[1637]: time="2025-05-08T00:36:58.518744722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:58.518973 containerd[1637]: time="2025-05-08T00:36:58.518920396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:36:58.568451 containerd[1637]: time="2025-05-08T00:36:58.568407027Z" level=info msg="StartContainer for \"c9268cad0113731901e3749a7f22b83bf9a374c2e73cd3bb8985c4aadd322e03\" returns successfully" May 8 00:36:58.569742 containerd[1637]: time="2025-05-08T00:36:58.569700092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-7wtb6,Uid:c928df0e-4d47-410b-887f-bbf41ec0e7c8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"db4c92ebe278d41693b1ee74b77e9712623803099792036e61a9bf666393c0fc\"" May 8 00:36:58.575436 containerd[1637]: time="2025-05-08T00:36:58.575377925Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 00:37:00.049994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57490077.mount: Deactivated successfully. 
May 8 00:37:00.402296 containerd[1637]: time="2025-05-08T00:37:00.402261952Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:00.403823 containerd[1637]: time="2025-05-08T00:37:00.403793424Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 00:37:00.409097 containerd[1637]: time="2025-05-08T00:37:00.409043573Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:00.410582 containerd[1637]: time="2025-05-08T00:37:00.410561161Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:00.411222 containerd[1637]: time="2025-05-08T00:37:00.411118585Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.835712114s" May 8 00:37:00.411222 containerd[1637]: time="2025-05-08T00:37:00.411145724Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 00:37:00.416200 containerd[1637]: time="2025-05-08T00:37:00.416110164Z" level=info msg="CreateContainer within sandbox \"db4c92ebe278d41693b1ee74b77e9712623803099792036e61a9bf666393c0fc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 00:37:00.431501 containerd[1637]: time="2025-05-08T00:37:00.431467250Z" level=info msg="CreateContainer within sandbox 
\"db4c92ebe278d41693b1ee74b77e9712623803099792036e61a9bf666393c0fc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84b6b50b2bc6f6bd1d5ccce886d6eb98428ac66e831fb18db41e22701897ae7c\"" May 8 00:37:00.432154 containerd[1637]: time="2025-05-08T00:37:00.431851703Z" level=info msg="StartContainer for \"84b6b50b2bc6f6bd1d5ccce886d6eb98428ac66e831fb18db41e22701897ae7c\"" May 8 00:37:00.493941 containerd[1637]: time="2025-05-08T00:37:00.493910155Z" level=info msg="StartContainer for \"84b6b50b2bc6f6bd1d5ccce886d6eb98428ac66e831fb18db41e22701897ae7c\" returns successfully" May 8 00:37:01.244692 kubelet[2965]: I0508 00:37:01.244474 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9wlz" podStartSLOduration=4.244455363 podStartE2EDuration="4.244455363s" podCreationTimestamp="2025-05-08 00:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:36:59.24765687 +0000 UTC m=+15.180965506" watchObservedRunningTime="2025-05-08 00:37:01.244455363 +0000 UTC m=+17.177763991" May 8 00:37:03.371176 kubelet[2965]: I0508 00:37:03.370976 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-7wtb6" podStartSLOduration=3.526237474 podStartE2EDuration="5.370958571s" podCreationTimestamp="2025-05-08 00:36:58 +0000 UTC" firstStartedPulling="2025-05-08 00:36:58.570414211 +0000 UTC m=+14.503722835" lastFinishedPulling="2025-05-08 00:37:00.415135307 +0000 UTC m=+16.348443932" observedRunningTime="2025-05-08 00:37:01.24506114 +0000 UTC m=+17.178369772" watchObservedRunningTime="2025-05-08 00:37:03.370958571 +0000 UTC m=+19.304267206" May 8 00:37:03.385329 kubelet[2965]: I0508 00:37:03.384407 2965 topology_manager.go:215] "Topology Admit Handler" podUID="2c167c8d-a0b7-4c38-a16f-3f86af2c2838" podNamespace="calico-system" 
podName="calico-typha-5558c5b55d-zxqj8" May 8 00:37:03.499978 kubelet[2965]: I0508 00:37:03.499008 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-tigera-ca-bundle\") pod \"calico-typha-5558c5b55d-zxqj8\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") " pod="calico-system/calico-typha-5558c5b55d-zxqj8" May 8 00:37:03.499978 kubelet[2965]: I0508 00:37:03.499064 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-typha-certs\") pod \"calico-typha-5558c5b55d-zxqj8\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") " pod="calico-system/calico-typha-5558c5b55d-zxqj8" May 8 00:37:03.499978 kubelet[2965]: I0508 00:37:03.499085 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnwx\" (UniqueName: \"kubernetes.io/projected/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-kube-api-access-tnnwx\") pod \"calico-typha-5558c5b55d-zxqj8\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") " pod="calico-system/calico-typha-5558c5b55d-zxqj8" May 8 00:37:03.506427 kubelet[2965]: I0508 00:37:03.505302 2965 topology_manager.go:215] "Topology Admit Handler" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" podNamespace="calico-system" podName="calico-node-5kr6b" May 8 00:37:03.599820 kubelet[2965]: I0508 00:37:03.599786 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-flexvol-driver-host\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.599929 kubelet[2965]: I0508 00:37:03.599847 2965 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zrj\" (UniqueName: \"kubernetes.io/projected/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-kube-api-access-89zrj\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.599929 kubelet[2965]: I0508 00:37:03.599877 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-policysync\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.599929 kubelet[2965]: I0508 00:37:03.599922 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-node-certs\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600010 kubelet[2965]: I0508 00:37:03.599943 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-run-calico\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600010 kubelet[2965]: I0508 00:37:03.599959 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-lib-modules\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600054 kubelet[2965]: I0508 00:37:03.600019 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-lib-calico\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600054 kubelet[2965]: I0508 00:37:03.600034 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-bin-dir\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600054 kubelet[2965]: I0508 00:37:03.600045 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-log-dir\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600117 kubelet[2965]: I0508 00:37:03.600061 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-tigera-ca-bundle\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600150 kubelet[2965]: I0508 00:37:03.600115 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-xtables-lock\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.600181 kubelet[2965]: I0508 00:37:03.600146 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-net-dir\") pod \"calico-node-5kr6b\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " pod="calico-system/calico-node-5kr6b" May 8 00:37:03.670372 kubelet[2965]: I0508 00:37:03.670277 2965 topology_manager.go:215] "Topology Admit Handler" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" podNamespace="calico-system" podName="csi-node-driver-htn9m" May 8 00:37:03.679993 kubelet[2965]: E0508 00:37:03.679955 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:03.700870 kubelet[2965]: I0508 00:37:03.700850 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5qzw\" (UniqueName: \"kubernetes.io/projected/a6a1f446-8d54-4427-9c9d-1d9192e66ef3-kube-api-access-f5qzw\") pod \"csi-node-driver-htn9m\" (UID: \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\") " pod="calico-system/csi-node-driver-htn9m" May 8 00:37:03.700940 kubelet[2965]: I0508 00:37:03.700873 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a6a1f446-8d54-4427-9c9d-1d9192e66ef3-varrun\") pod \"csi-node-driver-htn9m\" (UID: \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\") " pod="calico-system/csi-node-driver-htn9m" May 8 00:37:03.700940 kubelet[2965]: I0508 00:37:03.700884 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6a1f446-8d54-4427-9c9d-1d9192e66ef3-kubelet-dir\") pod \"csi-node-driver-htn9m\" (UID: \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\") " pod="calico-system/csi-node-driver-htn9m" May 8 00:37:03.700940 
kubelet[2965]: I0508 00:37:03.700893 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6a1f446-8d54-4427-9c9d-1d9192e66ef3-socket-dir\") pod \"csi-node-driver-htn9m\" (UID: \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\") " pod="calico-system/csi-node-driver-htn9m" May 8 00:37:03.700940 kubelet[2965]: I0508 00:37:03.700902 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6a1f446-8d54-4427-9c9d-1d9192e66ef3-registration-dir\") pod \"csi-node-driver-htn9m\" (UID: \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\") " pod="calico-system/csi-node-driver-htn9m" May 8 00:37:03.718463 kubelet[2965]: E0508 00:37:03.716184 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.718463 kubelet[2965]: W0508 00:37:03.716209 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.718463 kubelet[2965]: E0508 00:37:03.716223 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.777541 containerd[1637]: time="2025-05-08T00:37:03.777166825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5558c5b55d-zxqj8,Uid:2c167c8d-a0b7-4c38-a16f-3f86af2c2838,Namespace:calico-system,Attempt:0,}" May 8 00:37:03.802096 kubelet[2965]: E0508 00:37:03.802074 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.802096 kubelet[2965]: W0508 00:37:03.802090 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.802096 kubelet[2965]: E0508 00:37:03.802104 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.802404 kubelet[2965]: E0508 00:37:03.802210 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.802404 kubelet[2965]: W0508 00:37:03.802215 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.802404 kubelet[2965]: E0508 00:37:03.802221 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.802596 kubelet[2965]: E0508 00:37:03.802511 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.802596 kubelet[2965]: W0508 00:37:03.802522 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.802596 kubelet[2965]: E0508 00:37:03.802537 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.802743 kubelet[2965]: E0508 00:37:03.802706 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.802743 kubelet[2965]: W0508 00:37:03.802713 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.802743 kubelet[2965]: E0508 00:37:03.802719 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.802986 kubelet[2965]: E0508 00:37:03.802916 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.802986 kubelet[2965]: W0508 00:37:03.802928 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.802986 kubelet[2965]: E0508 00:37:03.802942 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.803272 kubelet[2965]: E0508 00:37:03.803163 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.803272 kubelet[2965]: W0508 00:37:03.803170 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.803272 kubelet[2965]: E0508 00:37:03.803185 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.803363 kubelet[2965]: E0508 00:37:03.803340 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.803363 kubelet[2965]: W0508 00:37:03.803358 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.803418 kubelet[2965]: E0508 00:37:03.803368 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803488 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.804204 kubelet[2965]: W0508 00:37:03.803565 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803573 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803671 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.804204 kubelet[2965]: W0508 00:37:03.803675 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803680 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803861 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.804204 kubelet[2965]: W0508 00:37:03.803868 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.803877 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.804204 kubelet[2965]: E0508 00:37:03.804001 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814100 kubelet[2965]: W0508 00:37:03.804005 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804010 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804101 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814100 kubelet[2965]: W0508 00:37:03.804106 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804111 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804294 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814100 kubelet[2965]: W0508 00:37:03.804299 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804309 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.814100 kubelet[2965]: E0508 00:37:03.804459 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814100 kubelet[2965]: W0508 00:37:03.804464 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804501 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804639 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814889 kubelet[2965]: W0508 00:37:03.804644 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804659 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804753 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814889 kubelet[2965]: W0508 00:37:03.804758 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804772 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804860 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.814889 kubelet[2965]: W0508 00:37:03.804865 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.814889 kubelet[2965]: E0508 00:37:03.804873 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.804977 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815389 kubelet[2965]: W0508 00:37:03.804985 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.804997 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.805122 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815389 kubelet[2965]: W0508 00:37:03.805128 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.805138 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.805245 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815389 kubelet[2965]: W0508 00:37:03.805249 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.805258 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.815389 kubelet[2965]: E0508 00:37:03.805390 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815594 kubelet[2965]: W0508 00:37:03.805394 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805403 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805504 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815594 kubelet[2965]: W0508 00:37:03.805509 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805519 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805616 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815594 kubelet[2965]: W0508 00:37:03.805621 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805631 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.815594 kubelet[2965]: E0508 00:37:03.805752 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815594 kubelet[2965]: W0508 00:37:03.805759 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815806 kubelet[2965]: E0508 00:37:03.805766 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:03.815806 kubelet[2965]: E0508 00:37:03.813984 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.815806 kubelet[2965]: W0508 00:37:03.813995 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.815806 kubelet[2965]: E0508 00:37:03.814007 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.818356 containerd[1637]: time="2025-05-08T00:37:03.816694603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5kr6b,Uid:eec1817e-5afa-4b99-9c1c-1caa3e33fbd4,Namespace:calico-system,Attempt:0,}" May 8 00:37:03.818576 kubelet[2965]: E0508 00:37:03.818561 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:03.818576 kubelet[2965]: W0508 00:37:03.818572 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:03.818652 kubelet[2965]: E0508 00:37:03.818584 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:03.871219 containerd[1637]: time="2025-05-08T00:37:03.871147214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:03.871219 containerd[1637]: time="2025-05-08T00:37:03.871194354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:03.871219 containerd[1637]: time="2025-05-08T00:37:03.871201707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:03.878140 containerd[1637]: time="2025-05-08T00:37:03.871276280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:03.907933 containerd[1637]: time="2025-05-08T00:37:03.907733708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:03.907933 containerd[1637]: time="2025-05-08T00:37:03.907795821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:03.907933 containerd[1637]: time="2025-05-08T00:37:03.907860139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:03.908155 containerd[1637]: time="2025-05-08T00:37:03.908047466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:03.943222 containerd[1637]: time="2025-05-08T00:37:03.942113709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5558c5b55d-zxqj8,Uid:2c167c8d-a0b7-4c38-a16f-3f86af2c2838,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\"" May 8 00:37:03.961339 containerd[1637]: time="2025-05-08T00:37:03.960907461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:37:03.972789 containerd[1637]: time="2025-05-08T00:37:03.972754954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5kr6b,Uid:eec1817e-5afa-4b99-9c1c-1caa3e33fbd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\"" May 8 00:37:05.182877 kubelet[2965]: E0508 00:37:05.182797 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:06.775884 containerd[1637]: time="2025-05-08T00:37:06.775822393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:06.776473 containerd[1637]: time="2025-05-08T00:37:06.776360702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:37:06.779206 containerd[1637]: time="2025-05-08T00:37:06.779164724Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:06.780605 containerd[1637]: time="2025-05-08T00:37:06.780572809Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:06.781302 containerd[1637]: time="2025-05-08T00:37:06.781209292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.820243876s" May 8 00:37:06.781302 containerd[1637]: time="2025-05-08T00:37:06.781232839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:37:06.783531 containerd[1637]: time="2025-05-08T00:37:06.783403715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:37:06.798141 containerd[1637]: time="2025-05-08T00:37:06.798107812Z" level=info msg="CreateContainer within sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:37:06.809814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605613123.mount: Deactivated successfully. 
May 8 00:37:06.827183 containerd[1637]: time="2025-05-08T00:37:06.827147164Z" level=info msg="CreateContainer within sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\"" May 8 00:37:06.827674 containerd[1637]: time="2025-05-08T00:37:06.827662598Z" level=info msg="StartContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\"" May 8 00:37:06.881507 containerd[1637]: time="2025-05-08T00:37:06.881279034Z" level=info msg="StartContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" returns successfully" May 8 00:37:07.190659 kubelet[2965]: E0508 00:37:07.190533 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:07.281036 kubelet[2965]: I0508 00:37:07.279546 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5558c5b55d-zxqj8" podStartSLOduration=1.457707568 podStartE2EDuration="4.279531392s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" firstStartedPulling="2025-05-08 00:37:03.96008659 +0000 UTC m=+19.893395214" lastFinishedPulling="2025-05-08 00:37:06.781910408 +0000 UTC m=+22.715219038" observedRunningTime="2025-05-08 00:37:07.278464239 +0000 UTC m=+23.211772880" watchObservedRunningTime="2025-05-08 00:37:07.279531392 +0000 UTC m=+23.212840022" May 8 00:37:07.308087 kubelet[2965]: E0508 00:37:07.308062 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308087 kubelet[2965]: W0508 00:37:07.308082 2965 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308242 kubelet[2965]: E0508 00:37:07.308099 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.308242 kubelet[2965]: E0508 00:37:07.308217 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308242 kubelet[2965]: W0508 00:37:07.308223 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308242 kubelet[2965]: E0508 00:37:07.308230 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.308415 kubelet[2965]: E0508 00:37:07.308320 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308415 kubelet[2965]: W0508 00:37:07.308325 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308415 kubelet[2965]: E0508 00:37:07.308329 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.308506 kubelet[2965]: E0508 00:37:07.308476 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308506 kubelet[2965]: W0508 00:37:07.308482 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308506 kubelet[2965]: E0508 00:37:07.308487 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308585 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308957 kubelet[2965]: W0508 00:37:07.308591 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308599 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308685 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308957 kubelet[2965]: W0508 00:37:07.308691 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308699 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308782 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.308957 kubelet[2965]: W0508 00:37:07.308795 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308801 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.308957 kubelet[2965]: E0508 00:37:07.308939 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.309547 kubelet[2965]: W0508 00:37:07.308945 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.308950 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.309041 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.309547 kubelet[2965]: W0508 00:37:07.309047 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.309056 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.309137 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.309547 kubelet[2965]: W0508 00:37:07.309143 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.309148 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.309547 kubelet[2965]: E0508 00:37:07.309246 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.309547 kubelet[2965]: W0508 00:37:07.309250 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309256 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309400 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.310178 kubelet[2965]: W0508 00:37:07.309404 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309409 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309527 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.310178 kubelet[2965]: W0508 00:37:07.309532 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309538 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309624 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.310178 kubelet[2965]: W0508 00:37:07.309629 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.310178 kubelet[2965]: E0508 00:37:07.309634 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.310410 kubelet[2965]: E0508 00:37:07.309719 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.310410 kubelet[2965]: W0508 00:37:07.309724 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.310410 kubelet[2965]: E0508 00:37:07.309728 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.333152 kubelet[2965]: E0508 00:37:07.333104 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333152 kubelet[2965]: W0508 00:37:07.333116 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.333152 kubelet[2965]: E0508 00:37:07.333128 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.333389 kubelet[2965]: E0508 00:37:07.333249 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333389 kubelet[2965]: W0508 00:37:07.333254 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.333389 kubelet[2965]: E0508 00:37:07.333271 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.333389 kubelet[2965]: E0508 00:37:07.333385 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333389 kubelet[2965]: W0508 00:37:07.333390 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.333510 kubelet[2965]: E0508 00:37:07.333399 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.333543 kubelet[2965]: E0508 00:37:07.333519 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333543 kubelet[2965]: W0508 00:37:07.333524 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.333543 kubelet[2965]: E0508 00:37:07.333531 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.333635 kubelet[2965]: E0508 00:37:07.333625 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333635 kubelet[2965]: W0508 00:37:07.333633 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.333693 kubelet[2965]: E0508 00:37:07.333641 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.333752 kubelet[2965]: E0508 00:37:07.333740 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.333752 kubelet[2965]: W0508 00:37:07.333748 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.333756 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.333859 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336716 kubelet[2965]: W0508 00:37:07.333863 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.333868 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.333980 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336716 kubelet[2965]: W0508 00:37:07.333987 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.333997 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.334096 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336716 kubelet[2965]: W0508 00:37:07.334102 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336716 kubelet[2965]: E0508 00:37:07.334108 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334204 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336903 kubelet[2965]: W0508 00:37:07.334209 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334219 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334325 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336903 kubelet[2965]: W0508 00:37:07.334330 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334336 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334532 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.336903 kubelet[2965]: W0508 00:37:07.334537 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334543 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.336903 kubelet[2965]: E0508 00:37:07.334627 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346294 kubelet[2965]: W0508 00:37:07.334632 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.334637 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.334715 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346294 kubelet[2965]: W0508 00:37:07.334720 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.334725 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.334815 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346294 kubelet[2965]: W0508 00:37:07.334819 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.334824 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.346294 kubelet[2965]: E0508 00:37:07.335073 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346294 kubelet[2965]: W0508 00:37:07.335077 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346544 kubelet[2965]: E0508 00:37:07.335083 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:07.346544 kubelet[2965]: E0508 00:37:07.335178 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346544 kubelet[2965]: W0508 00:37:07.335182 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346544 kubelet[2965]: E0508 00:37:07.335187 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:07.346544 kubelet[2965]: E0508 00:37:07.337761 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:07.346544 kubelet[2965]: W0508 00:37:07.337766 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:07.346544 kubelet[2965]: E0508 00:37:07.337771 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.266284 kubelet[2965]: I0508 00:37:08.265951 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:08.316075 kubelet[2965]: E0508 00:37:08.316058 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.316281 kubelet[2965]: W0508 00:37:08.316187 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.316281 kubelet[2965]: E0508 00:37:08.316212 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.316513 kubelet[2965]: E0508 00:37:08.316505 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.316625 kubelet[2965]: W0508 00:37:08.316577 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.316625 kubelet[2965]: E0508 00:37:08.316593 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.316941 kubelet[2965]: E0508 00:37:08.316871 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.316941 kubelet[2965]: W0508 00:37:08.316879 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.316941 kubelet[2965]: E0508 00:37:08.316887 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.317113 kubelet[2965]: E0508 00:37:08.317054 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.317113 kubelet[2965]: W0508 00:37:08.317060 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.317113 kubelet[2965]: E0508 00:37:08.317067 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.317404 kubelet[2965]: E0508 00:37:08.317363 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.317404 kubelet[2965]: W0508 00:37:08.317371 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.317404 kubelet[2965]: E0508 00:37:08.317379 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.317697 kubelet[2965]: E0508 00:37:08.317625 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.317697 kubelet[2965]: W0508 00:37:08.317632 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.317697 kubelet[2965]: E0508 00:37:08.317639 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.317845 kubelet[2965]: E0508 00:37:08.317806 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.317845 kubelet[2965]: W0508 00:37:08.317813 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.317845 kubelet[2965]: E0508 00:37:08.317820 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.318079 kubelet[2965]: E0508 00:37:08.318031 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.318079 kubelet[2965]: W0508 00:37:08.318039 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.318079 kubelet[2965]: E0508 00:37:08.318045 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.318329 kubelet[2965]: E0508 00:37:08.318265 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.318329 kubelet[2965]: W0508 00:37:08.318272 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.318329 kubelet[2965]: E0508 00:37:08.318279 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.318504 kubelet[2965]: E0508 00:37:08.318467 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.318504 kubelet[2965]: W0508 00:37:08.318474 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.318504 kubelet[2965]: E0508 00:37:08.318480 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.318750 kubelet[2965]: E0508 00:37:08.318739 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.318845 kubelet[2965]: W0508 00:37:08.318805 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.318845 kubelet[2965]: E0508 00:37:08.318818 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.319095 kubelet[2965]: E0508 00:37:08.319029 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.319095 kubelet[2965]: W0508 00:37:08.319036 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.319095 kubelet[2965]: E0508 00:37:08.319043 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.319271 kubelet[2965]: E0508 00:37:08.319230 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.319271 kubelet[2965]: W0508 00:37:08.319237 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.319271 kubelet[2965]: E0508 00:37:08.319244 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.319560 kubelet[2965]: E0508 00:37:08.319517 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.319560 kubelet[2965]: W0508 00:37:08.319524 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.319560 kubelet[2965]: E0508 00:37:08.319531 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.319774 kubelet[2965]: E0508 00:37:08.319767 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.319860 kubelet[2965]: W0508 00:37:08.319817 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.319860 kubelet[2965]: E0508 00:37:08.319827 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.340209 kubelet[2965]: E0508 00:37:08.340188 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.340209 kubelet[2965]: W0508 00:37:08.340203 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.340398 kubelet[2965]: E0508 00:37:08.340226 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.340468 kubelet[2965]: E0508 00:37:08.340454 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.340468 kubelet[2965]: W0508 00:37:08.340465 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.340834 kubelet[2965]: E0508 00:37:08.340477 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.340834 kubelet[2965]: E0508 00:37:08.340640 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.340834 kubelet[2965]: W0508 00:37:08.340645 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.340834 kubelet[2965]: E0508 00:37:08.340650 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.341091 kubelet[2965]: E0508 00:37:08.341072 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.341091 kubelet[2965]: W0508 00:37:08.341080 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.341091 kubelet[2965]: E0508 00:37:08.341089 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.341443 kubelet[2965]: E0508 00:37:08.341432 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.341443 kubelet[2965]: W0508 00:37:08.341441 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.341531 kubelet[2965]: E0508 00:37:08.341450 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.342127 kubelet[2965]: E0508 00:37:08.342072 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.342127 kubelet[2965]: W0508 00:37:08.342080 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.342210 kubelet[2965]: E0508 00:37:08.342146 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.342335 kubelet[2965]: E0508 00:37:08.342230 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.342335 kubelet[2965]: W0508 00:37:08.342242 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.342335 kubelet[2965]: E0508 00:37:08.342294 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.342896 kubelet[2965]: E0508 00:37:08.342366 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.342896 kubelet[2965]: W0508 00:37:08.342371 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.342896 kubelet[2965]: E0508 00:37:08.342471 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.342896 kubelet[2965]: W0508 00:37:08.342475 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.342896 kubelet[2965]: E0508 00:37:08.342481 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.342896 kubelet[2965]: E0508 00:37:08.342660 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.343204 kubelet[2965]: E0508 00:37:08.343080 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.343204 kubelet[2965]: W0508 00:37:08.343086 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.343204 kubelet[2965]: E0508 00:37:08.343097 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.343361 kubelet[2965]: E0508 00:37:08.343283 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.343361 kubelet[2965]: W0508 00:37:08.343290 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.343361 kubelet[2965]: E0508 00:37:08.343299 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.343576 kubelet[2965]: E0508 00:37:08.343526 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.343576 kubelet[2965]: W0508 00:37:08.343532 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.343576 kubelet[2965]: E0508 00:37:08.343541 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.343934 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.345828 kubelet[2965]: W0508 00:37:08.343940 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.343993 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.344090 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.345828 kubelet[2965]: W0508 00:37:08.344098 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.344192 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.344406 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.345828 kubelet[2965]: W0508 00:37:08.344413 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.344425 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.345828 kubelet[2965]: E0508 00:37:08.344567 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.346046 kubelet[2965]: W0508 00:37:08.344573 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.346046 kubelet[2965]: E0508 00:37:08.344580 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.346046 kubelet[2965]: E0508 00:37:08.344713 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.346046 kubelet[2965]: W0508 00:37:08.344718 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.346046 kubelet[2965]: E0508 00:37:08.344724 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:37:08.346046 kubelet[2965]: E0508 00:37:08.344950 2965 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:37:08.346046 kubelet[2965]: W0508 00:37:08.344955 2965 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:37:08.346046 kubelet[2965]: E0508 00:37:08.344961 2965 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:37:08.656996 containerd[1637]: time="2025-05-08T00:37:08.656963811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:08.657629 containerd[1637]: time="2025-05-08T00:37:08.657588318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:37:08.658632 containerd[1637]: time="2025-05-08T00:37:08.658612579Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:08.660246 containerd[1637]: time="2025-05-08T00:37:08.660227834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:08.660705 containerd[1637]: time="2025-05-08T00:37:08.660685652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.877258948s" May 8 00:37:08.660735 containerd[1637]: time="2025-05-08T00:37:08.660705183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:37:08.663335 containerd[1637]: time="2025-05-08T00:37:08.663304478Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:37:08.670701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80577388.mount: Deactivated successfully. May 8 00:37:08.681847 containerd[1637]: time="2025-05-08T00:37:08.681815769Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\"" May 8 00:37:08.683175 containerd[1637]: time="2025-05-08T00:37:08.683076737Z" level=info msg="StartContainer for \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\"" May 8 00:37:08.724539 containerd[1637]: time="2025-05-08T00:37:08.724499265Z" level=info msg="StartContainer for \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\" returns successfully" May 8 00:37:08.787255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25-rootfs.mount: Deactivated successfully. 
May 8 00:37:08.821243 containerd[1637]: time="2025-05-08T00:37:08.820273559Z" level=info msg="shim disconnected" id=33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25 namespace=k8s.io May 8 00:37:08.821243 containerd[1637]: time="2025-05-08T00:37:08.821240187Z" level=warning msg="cleaning up after shim disconnected" id=33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25 namespace=k8s.io May 8 00:37:08.821243 containerd[1637]: time="2025-05-08T00:37:08.821247545Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:09.182759 kubelet[2965]: E0508 00:37:09.182722 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:09.268541 containerd[1637]: time="2025-05-08T00:37:09.268518313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:37:09.717761 kubelet[2965]: I0508 00:37:09.717639 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:11.183114 kubelet[2965]: E0508 00:37:11.183045 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:13.183480 kubelet[2965]: E0508 00:37:13.183404 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:14.856589 
containerd[1637]: time="2025-05-08T00:37:14.856546838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:14.863313 containerd[1637]: time="2025-05-08T00:37:14.863261125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:37:14.868580 containerd[1637]: time="2025-05-08T00:37:14.868544316Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:14.877041 containerd[1637]: time="2025-05-08T00:37:14.877010481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:14.877482 containerd[1637]: time="2025-05-08T00:37:14.877409770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.608870427s" May 8 00:37:14.877482 containerd[1637]: time="2025-05-08T00:37:14.877427059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:37:14.878807 containerd[1637]: time="2025-05-08T00:37:14.878744299Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:37:14.930494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284809275.mount: Deactivated successfully. 
May 8 00:37:14.931747 containerd[1637]: time="2025-05-08T00:37:14.931719337Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\"" May 8 00:37:14.932967 containerd[1637]: time="2025-05-08T00:37:14.932938965Z" level=info msg="StartContainer for \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\"" May 8 00:37:14.982563 containerd[1637]: time="2025-05-08T00:37:14.982532461Z" level=info msg="StartContainer for \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\" returns successfully" May 8 00:37:15.183145 kubelet[2965]: E0508 00:37:15.183048 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:17.166713 containerd[1637]: time="2025-05-08T00:37:17.166663680Z" level=info msg="shim disconnected" id=1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f namespace=k8s.io May 8 00:37:17.167863 containerd[1637]: time="2025-05-08T00:37:17.166769329Z" level=warning msg="cleaning up after shim disconnected" id=1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f namespace=k8s.io May 8 00:37:17.167863 containerd[1637]: time="2025-05-08T00:37:17.166778975Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:17.166742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f-rootfs.mount: Deactivated successfully. 
May 8 00:37:17.183211 kubelet[2965]: I0508 00:37:17.182419 2965 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:37:17.192800 containerd[1637]: time="2025-05-08T00:37:17.192216806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htn9m,Uid:a6a1f446-8d54-4427-9c9d-1d9192e66ef3,Namespace:calico-system,Attempt:0,}" May 8 00:37:17.196837 kubelet[2965]: I0508 00:37:17.196726 2965 topology_manager.go:215] "Topology Admit Handler" podUID="aa7783fa-83ef-4b49-bf44-ff04e1503a73" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2lxxx" May 8 00:37:17.205605 kubelet[2965]: I0508 00:37:17.201491 2965 topology_manager.go:215] "Topology Admit Handler" podUID="8ca8c9f2-9680-4a61-ad4e-7654297c3c62" podNamespace="kube-system" podName="coredns-7db6d8ff4d-skfjc" May 8 00:37:17.205605 kubelet[2965]: I0508 00:37:17.203181 2965 topology_manager.go:215] "Topology Admit Handler" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" podNamespace="calico-apiserver" podName="calico-apiserver-84669494cd-c6tg9" May 8 00:37:17.229028 kubelet[2965]: I0508 00:37:17.228273 2965 topology_manager.go:215] "Topology Admit Handler" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" podNamespace="calico-system" podName="calico-kube-controllers-67ddb48bf6-z6mbt" May 8 00:37:17.229028 kubelet[2965]: I0508 00:37:17.228406 2965 topology_manager.go:215] "Topology Admit Handler" podUID="1adaa741-871f-441d-b6ca-732d5537fc5a" podNamespace="calico-apiserver" podName="calico-apiserver-6d6795bb7b-bt6hl" May 8 00:37:17.229028 kubelet[2965]: I0508 00:37:17.228754 2965 topology_manager.go:215] "Topology Admit Handler" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" podNamespace="calico-apiserver" podName="calico-apiserver-84669494cd-gzmz2" May 8 00:37:17.281854 containerd[1637]: time="2025-05-08T00:37:17.281561994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:37:17.299741 kubelet[2965]: I0508 00:37:17.299722 2965 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/600599f4-a6d1-457c-9d46-c0a1214f1987-calico-apiserver-certs\") pod \"calico-apiserver-84669494cd-c6tg9\" (UID: \"600599f4-a6d1-457c-9d46-c0a1214f1987\") " pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" May 8 00:37:17.299967 kubelet[2965]: I0508 00:37:17.299848 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca8c9f2-9680-4a61-ad4e-7654297c3c62-config-volume\") pod \"coredns-7db6d8ff4d-skfjc\" (UID: \"8ca8c9f2-9680-4a61-ad4e-7654297c3c62\") " pod="kube-system/coredns-7db6d8ff4d-skfjc" May 8 00:37:17.299967 kubelet[2965]: I0508 00:37:17.299865 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4dhd\" (UniqueName: \"kubernetes.io/projected/8ca8c9f2-9680-4a61-ad4e-7654297c3c62-kube-api-access-d4dhd\") pod \"coredns-7db6d8ff4d-skfjc\" (UID: \"8ca8c9f2-9680-4a61-ad4e-7654297c3c62\") " pod="kube-system/coredns-7db6d8ff4d-skfjc" May 8 00:37:17.299967 kubelet[2965]: I0508 00:37:17.299879 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7783fa-83ef-4b49-bf44-ff04e1503a73-config-volume\") pod \"coredns-7db6d8ff4d-2lxxx\" (UID: \"aa7783fa-83ef-4b49-bf44-ff04e1503a73\") " pod="kube-system/coredns-7db6d8ff4d-2lxxx" May 8 00:37:17.299967 kubelet[2965]: I0508 00:37:17.299911 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrvj\" (UniqueName: \"kubernetes.io/projected/aa7783fa-83ef-4b49-bf44-ff04e1503a73-kube-api-access-9jrvj\") pod \"coredns-7db6d8ff4d-2lxxx\" (UID: \"aa7783fa-83ef-4b49-bf44-ff04e1503a73\") " pod="kube-system/coredns-7db6d8ff4d-2lxxx" May 8 
00:37:17.299967 kubelet[2965]: I0508 00:37:17.299924 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a350a15-4629-49d8-9537-09e1c8aafb63-calico-apiserver-certs\") pod \"calico-apiserver-84669494cd-gzmz2\" (UID: \"8a350a15-4629-49d8-9537-09e1c8aafb63\") " pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" May 8 00:37:17.300080 kubelet[2965]: I0508 00:37:17.299932 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgph\" (UniqueName: \"kubernetes.io/projected/600599f4-a6d1-457c-9d46-c0a1214f1987-kube-api-access-rmgph\") pod \"calico-apiserver-84669494cd-c6tg9\" (UID: \"600599f4-a6d1-457c-9d46-c0a1214f1987\") " pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" May 8 00:37:17.300080 kubelet[2965]: I0508 00:37:17.299944 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvcz\" (UniqueName: \"kubernetes.io/projected/8a350a15-4629-49d8-9537-09e1c8aafb63-kube-api-access-rpvcz\") pod \"calico-apiserver-84669494cd-gzmz2\" (UID: \"8a350a15-4629-49d8-9537-09e1c8aafb63\") " pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" May 8 00:37:17.300682 kubelet[2965]: I0508 00:37:17.299954 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-tigera-ca-bundle\") pod \"calico-kube-controllers-67ddb48bf6-z6mbt\" (UID: \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\") " pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" May 8 00:37:17.300682 kubelet[2965]: I0508 00:37:17.300234 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fm8\" (UniqueName: 
\"kubernetes.io/projected/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-kube-api-access-b9fm8\") pod \"calico-kube-controllers-67ddb48bf6-z6mbt\" (UID: \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\") " pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" May 8 00:37:17.300682 kubelet[2965]: I0508 00:37:17.300380 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1adaa741-871f-441d-b6ca-732d5537fc5a-calico-apiserver-certs\") pod \"calico-apiserver-6d6795bb7b-bt6hl\" (UID: \"1adaa741-871f-441d-b6ca-732d5537fc5a\") " pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" May 8 00:37:17.300682 kubelet[2965]: I0508 00:37:17.300394 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkf2\" (UniqueName: \"kubernetes.io/projected/1adaa741-871f-441d-b6ca-732d5537fc5a-kube-api-access-hmkf2\") pod \"calico-apiserver-6d6795bb7b-bt6hl\" (UID: \"1adaa741-871f-441d-b6ca-732d5537fc5a\") " pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" May 8 00:37:17.432745 containerd[1637]: time="2025-05-08T00:37:17.432390438Z" level=error msg="Failed to destroy network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.432745 containerd[1637]: time="2025-05-08T00:37:17.432668060Z" level=error msg="encountered an error cleaning up failed sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.437650 containerd[1637]: 
time="2025-05-08T00:37:17.437613644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htn9m,Uid:a6a1f446-8d54-4427-9c9d-1d9192e66ef3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.443886 kubelet[2965]: E0508 00:37:17.438099 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.444019 kubelet[2965]: E0508 00:37:17.443919 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htn9m" May 8 00:37:17.444019 kubelet[2965]: E0508 00:37:17.443940 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-htn9m" May 8 00:37:17.444019 kubelet[2965]: E0508 00:37:17.443978 2965 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-htn9m_calico-system(a6a1f446-8d54-4427-9c9d-1d9192e66ef3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-htn9m_calico-system(a6a1f446-8d54-4427-9c9d-1d9192e66ef3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:17.529334 containerd[1637]: time="2025-05-08T00:37:17.529224111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skfjc,Uid:8ca8c9f2-9680-4a61-ad4e-7654297c3c62,Namespace:kube-system,Attempt:0,}" May 8 00:37:17.534362 containerd[1637]: time="2025-05-08T00:37:17.534258827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-c6tg9,Uid:600599f4-a6d1-457c-9d46-c0a1214f1987,Namespace:calico-apiserver,Attempt:0,}" May 8 00:37:17.536793 containerd[1637]: time="2025-05-08T00:37:17.536770400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lxxx,Uid:aa7783fa-83ef-4b49-bf44-ff04e1503a73,Namespace:kube-system,Attempt:0,}" May 8 00:37:17.537598 containerd[1637]: time="2025-05-08T00:37:17.537580452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-gzmz2,Uid:8a350a15-4629-49d8-9537-09e1c8aafb63,Namespace:calico-apiserver,Attempt:0,}" May 8 00:37:17.538071 containerd[1637]: time="2025-05-08T00:37:17.537929125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67ddb48bf6-z6mbt,Uid:9d6ebfe3-fdd8-4570-8cd5-315117175ab6,Namespace:calico-system,Attempt:0,}" May 8 00:37:17.548062 
containerd[1637]: time="2025-05-08T00:37:17.547956430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-bt6hl,Uid:1adaa741-871f-441d-b6ca-732d5537fc5a,Namespace:calico-apiserver,Attempt:0,}" May 8 00:37:17.647528 containerd[1637]: time="2025-05-08T00:37:17.647420062Z" level=error msg="Failed to destroy network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.648148 containerd[1637]: time="2025-05-08T00:37:17.648120812Z" level=error msg="encountered an error cleaning up failed sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.649330 containerd[1637]: time="2025-05-08T00:37:17.648160722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-c6tg9,Uid:600599f4-a6d1-457c-9d46-c0a1214f1987,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.649524 kubelet[2965]: E0508 00:37:17.648337 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.649524 kubelet[2965]: E0508 00:37:17.648406 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" May 8 00:37:17.649524 kubelet[2965]: E0508 00:37:17.648418 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" May 8 00:37:17.649611 kubelet[2965]: E0508 00:37:17.648448 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84669494cd-c6tg9_calico-apiserver(600599f4-a6d1-457c-9d46-c0a1214f1987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84669494cd-c6tg9_calico-apiserver(600599f4-a6d1-457c-9d46-c0a1214f1987)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" May 8 00:37:17.669662 containerd[1637]: time="2025-05-08T00:37:17.669623562Z" 
level=error msg="Failed to destroy network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.669915 containerd[1637]: time="2025-05-08T00:37:17.669859160Z" level=error msg="encountered an error cleaning up failed sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.669915 containerd[1637]: time="2025-05-08T00:37:17.669888155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lxxx,Uid:aa7783fa-83ef-4b49-bf44-ff04e1503a73,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.672430 kubelet[2965]: E0508 00:37:17.670021 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.672430 kubelet[2965]: E0508 00:37:17.670058 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2lxxx" May 8 00:37:17.672430 kubelet[2965]: E0508 00:37:17.670072 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2lxxx" May 8 00:37:17.672647 kubelet[2965]: E0508 00:37:17.670101 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2lxxx_kube-system(aa7783fa-83ef-4b49-bf44-ff04e1503a73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2lxxx_kube-system(aa7783fa-83ef-4b49-bf44-ff04e1503a73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2lxxx" podUID="aa7783fa-83ef-4b49-bf44-ff04e1503a73" May 8 00:37:17.676449 containerd[1637]: time="2025-05-08T00:37:17.676418913Z" level=error msg="Failed to destroy network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.676825 
containerd[1637]: time="2025-05-08T00:37:17.676810822Z" level=error msg="encountered an error cleaning up failed sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.676908 containerd[1637]: time="2025-05-08T00:37:17.676896115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67ddb48bf6-z6mbt,Uid:9d6ebfe3-fdd8-4570-8cd5-315117175ab6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.679947 kubelet[2965]: E0508 00:37:17.679925 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.680365 kubelet[2965]: E0508 00:37:17.680033 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" May 8 00:37:17.680365 kubelet[2965]: E0508 00:37:17.680050 
2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" May 8 00:37:17.680365 kubelet[2965]: E0508 00:37:17.680081 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67ddb48bf6-z6mbt_calico-system(9d6ebfe3-fdd8-4570-8cd5-315117175ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67ddb48bf6-z6mbt_calico-system(9d6ebfe3-fdd8-4570-8cd5-315117175ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" May 8 00:37:17.685773 containerd[1637]: time="2025-05-08T00:37:17.685693294Z" level=error msg="Failed to destroy network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.686747 containerd[1637]: time="2025-05-08T00:37:17.686687726Z" level=error msg="encountered an error cleaning up failed sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.686747 containerd[1637]: time="2025-05-08T00:37:17.686718167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skfjc,Uid:8ca8c9f2-9680-4a61-ad4e-7654297c3c62,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.687174 kubelet[2965]: E0508 00:37:17.686907 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.687174 kubelet[2965]: E0508 00:37:17.686943 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-skfjc" May 8 00:37:17.687174 kubelet[2965]: E0508 00:37:17.686955 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-skfjc" May 8 00:37:17.687248 kubelet[2965]: E0508 00:37:17.686988 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-skfjc_kube-system(8ca8c9f2-9680-4a61-ad4e-7654297c3c62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-skfjc_kube-system(8ca8c9f2-9680-4a61-ad4e-7654297c3c62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-skfjc" podUID="8ca8c9f2-9680-4a61-ad4e-7654297c3c62" May 8 00:37:17.692299 containerd[1637]: time="2025-05-08T00:37:17.692254300Z" level=error msg="Failed to destroy network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.692531 containerd[1637]: time="2025-05-08T00:37:17.692503476Z" level=error msg="encountered an error cleaning up failed sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.692558 containerd[1637]: time="2025-05-08T00:37:17.692549763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-gzmz2,Uid:8a350a15-4629-49d8-9537-09e1c8aafb63,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.693296 kubelet[2965]: E0508 00:37:17.692706 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.693296 kubelet[2965]: E0508 00:37:17.692740 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" May 8 00:37:17.693296 kubelet[2965]: E0508 00:37:17.692753 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" May 8 00:37:17.693392 kubelet[2965]: E0508 00:37:17.692791 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-84669494cd-gzmz2_calico-apiserver(8a350a15-4629-49d8-9537-09e1c8aafb63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84669494cd-gzmz2_calico-apiserver(8a350a15-4629-49d8-9537-09e1c8aafb63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" May 8 00:37:17.694610 containerd[1637]: time="2025-05-08T00:37:17.693969154Z" level=error msg="Failed to destroy network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.694610 containerd[1637]: time="2025-05-08T00:37:17.694267180Z" level=error msg="encountered an error cleaning up failed sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.694610 containerd[1637]: time="2025-05-08T00:37:17.694310252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-bt6hl,Uid:1adaa741-871f-441d-b6ca-732d5537fc5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.694998 kubelet[2965]: E0508 00:37:17.694455 2965 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:17.694998 kubelet[2965]: E0508 00:37:17.694483 2965 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" May 8 00:37:17.694998 kubelet[2965]: E0508 00:37:17.694495 2965 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" May 8 00:37:17.695063 kubelet[2965]: E0508 00:37:17.694517 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d6795bb7b-bt6hl_calico-apiserver(1adaa741-871f-441d-b6ca-732d5537fc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d6795bb7b-bt6hl_calico-apiserver(1adaa741-871f-441d-b6ca-732d5537fc5a)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" podUID="1adaa741-871f-441d-b6ca-732d5537fc5a" May 8 00:37:18.170473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83-shm.mount: Deactivated successfully. May 8 00:37:18.282475 kubelet[2965]: I0508 00:37:18.282340 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:18.283963 kubelet[2965]: I0508 00:37:18.283616 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:18.287114 kubelet[2965]: I0508 00:37:18.287089 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:18.289355 kubelet[2965]: I0508 00:37:18.289306 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:18.291136 kubelet[2965]: I0508 00:37:18.291047 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:18.292694 kubelet[2965]: I0508 00:37:18.292644 2965 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:18.293520 kubelet[2965]: I0508 00:37:18.293500 2965 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:18.334273 containerd[1637]: time="2025-05-08T00:37:18.334120169Z" level=info msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" May 8 00:37:18.335064 containerd[1637]: time="2025-05-08T00:37:18.334630868Z" level=info msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" May 8 00:37:18.335408 containerd[1637]: time="2025-05-08T00:37:18.335150956Z" level=info msg="Ensure that sandbox 19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17 in task-service has been cleanup successfully" May 8 00:37:18.335408 containerd[1637]: time="2025-05-08T00:37:18.335199089Z" level=info msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" May 8 00:37:18.335408 containerd[1637]: time="2025-05-08T00:37:18.335293775Z" level=info msg="Ensure that sandbox a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0 in task-service has been cleanup successfully" May 8 00:37:18.336098 containerd[1637]: time="2025-05-08T00:37:18.335154689Z" level=info msg="Ensure that sandbox 35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916 in task-service has been cleanup successfully" May 8 00:37:18.336243 containerd[1637]: time="2025-05-08T00:37:18.336224587Z" level=info msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" May 8 00:37:18.336336 containerd[1637]: time="2025-05-08T00:37:18.336320964Z" level=info msg="Ensure that sandbox 85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84 in task-service has been cleanup successfully" May 8 00:37:18.337058 containerd[1637]: time="2025-05-08T00:37:18.336869975Z" level=info msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" May 8 00:37:18.337058 containerd[1637]: time="2025-05-08T00:37:18.336980049Z" level=info msg="Ensure 
that sandbox e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83 in task-service has been cleanup successfully" May 8 00:37:18.337312 containerd[1637]: time="2025-05-08T00:37:18.337302180Z" level=info msg="StopPodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" May 8 00:37:18.337464 containerd[1637]: time="2025-05-08T00:37:18.337453919Z" level=info msg="Ensure that sandbox de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34 in task-service has been cleanup successfully" May 8 00:37:18.338158 containerd[1637]: time="2025-05-08T00:37:18.337639103Z" level=info msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" May 8 00:37:18.338464 containerd[1637]: time="2025-05-08T00:37:18.338450029Z" level=info msg="Ensure that sandbox 235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd in task-service has been cleanup successfully" May 8 00:37:18.383185 containerd[1637]: time="2025-05-08T00:37:18.382726619Z" level=error msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" failed" error="failed to destroy network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.383695 kubelet[2965]: E0508 00:37:18.383672 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:18.386893 
kubelet[2965]: E0508 00:37:18.383764 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17"} May 8 00:37:18.386893 kubelet[2965]: E0508 00:37:18.386852 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"600599f4-a6d1-457c-9d46-c0a1214f1987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.386893 kubelet[2965]: E0508 00:37:18.386869 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"600599f4-a6d1-457c-9d46-c0a1214f1987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" May 8 00:37:18.391866 containerd[1637]: time="2025-05-08T00:37:18.391829887Z" level=error msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" failed" error="failed to destroy network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.392145 kubelet[2965]: E0508 00:37:18.391996 2965 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:18.392145 kubelet[2965]: E0508 00:37:18.392039 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84"} May 8 00:37:18.392275 kubelet[2965]: E0508 00:37:18.392165 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa7783fa-83ef-4b49-bf44-ff04e1503a73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.392275 kubelet[2965]: E0508 00:37:18.392183 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa7783fa-83ef-4b49-bf44-ff04e1503a73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2lxxx" podUID="aa7783fa-83ef-4b49-bf44-ff04e1503a73" May 8 00:37:18.398503 containerd[1637]: time="2025-05-08T00:37:18.398440717Z" level=error msg="StopPodSandbox for 
\"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" failed" error="failed to destroy network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.398799 kubelet[2965]: E0508 00:37:18.398685 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:18.398799 kubelet[2965]: E0508 00:37:18.398715 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34"} May 8 00:37:18.398799 kubelet[2965]: E0508 00:37:18.398738 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a350a15-4629-49d8-9537-09e1c8aafb63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.398799 kubelet[2965]: E0508 00:37:18.398751 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a350a15-4629-49d8-9537-09e1c8aafb63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" May 8 00:37:18.399057 containerd[1637]: time="2025-05-08T00:37:18.399004663Z" level=error msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" failed" error="failed to destroy network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.399174 kubelet[2965]: E0508 00:37:18.399136 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:18.399174 kubelet[2965]: E0508 00:37:18.399152 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0"} May 8 00:37:18.399261 kubelet[2965]: E0508 00:37:18.399166 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ca8c9f2-9680-4a61-ad4e-7654297c3c62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.399261 kubelet[2965]: E0508 00:37:18.399247 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ca8c9f2-9680-4a61-ad4e-7654297c3c62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-skfjc" podUID="8ca8c9f2-9680-4a61-ad4e-7654297c3c62" May 8 00:37:18.399435 containerd[1637]: time="2025-05-08T00:37:18.399422485Z" level=error msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" failed" error="failed to destroy network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.399576 kubelet[2965]: E0508 00:37:18.399496 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:18.399576 kubelet[2965]: E0508 00:37:18.399510 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916"} May 8 00:37:18.399576 kubelet[2965]: E0508 00:37:18.399523 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.399870 kubelet[2965]: E0508 00:37:18.399534 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" May 8 00:37:18.400133 containerd[1637]: time="2025-05-08T00:37:18.400045971Z" level=error msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" failed" error="failed to destroy network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.400275 kubelet[2965]: E0508 00:37:18.400210 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:18.400275 kubelet[2965]: E0508 00:37:18.400225 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd"} May 8 00:37:18.400275 kubelet[2965]: E0508 00:37:18.400244 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1adaa741-871f-441d-b6ca-732d5537fc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.400275 kubelet[2965]: E0508 00:37:18.400254 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1adaa741-871f-441d-b6ca-732d5537fc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" podUID="1adaa741-871f-441d-b6ca-732d5537fc5a" May 8 00:37:18.404686 containerd[1637]: time="2025-05-08T00:37:18.404468584Z" level=error msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" failed" 
error="failed to destroy network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:37:18.404760 kubelet[2965]: E0508 00:37:18.404596 2965 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:18.404760 kubelet[2965]: E0508 00:37:18.404622 2965 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83"} May 8 00:37:18.404760 kubelet[2965]: E0508 00:37:18.404642 2965 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:37:18.404760 kubelet[2965]: E0508 00:37:18.404656 2965 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6a1f446-8d54-4427-9c9d-1d9192e66ef3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-htn9m" podUID="a6a1f446-8d54-4427-9c9d-1d9192e66ef3" May 8 00:37:23.212512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612816405.mount: Deactivated successfully. May 8 00:37:23.409368 containerd[1637]: time="2025-05-08T00:37:23.409275427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:23.412667 containerd[1637]: time="2025-05-08T00:37:23.411810990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:37:23.428200 containerd[1637]: time="2025-05-08T00:37:23.428182113Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:23.428715 containerd[1637]: time="2025-05-08T00:37:23.428702145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:23.429240 containerd[1637]: time="2025-05-08T00:37:23.429226061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.147637476s" May 8 00:37:23.429292 containerd[1637]: time="2025-05-08T00:37:23.429284461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference 
\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:37:23.490191 containerd[1637]: time="2025-05-08T00:37:23.489963853Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:37:23.536592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217230098.mount: Deactivated successfully. May 8 00:37:23.543517 containerd[1637]: time="2025-05-08T00:37:23.543493877Z" level=info msg="CreateContainer within sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\"" May 8 00:37:23.547430 containerd[1637]: time="2025-05-08T00:37:23.547410117Z" level=info msg="StartContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\"" May 8 00:37:23.613912 containerd[1637]: time="2025-05-08T00:37:23.613564921Z" level=info msg="StartContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" returns successfully" May 8 00:37:23.718365 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:37:23.721066 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 8 00:37:23.967670 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:37:23.995103 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:37:23.967712 systemd-resolved[1546]: Flushed all caches. 
May 8 00:37:24.351009 kubelet[2965]: I0508 00:37:24.338186 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5kr6b" podStartSLOduration=1.8626495250000001 podStartE2EDuration="21.32908757s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" firstStartedPulling="2025-05-08 00:37:03.97496397 +0000 UTC m=+19.908272594" lastFinishedPulling="2025-05-08 00:37:23.441402013 +0000 UTC m=+39.374710639" observedRunningTime="2025-05-08 00:37:24.327736733 +0000 UTC m=+40.261045366" watchObservedRunningTime="2025-05-08 00:37:24.32908757 +0000 UTC m=+40.262396198" May 8 00:37:25.932375 kernel: bpftool[4278]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:37:26.016148 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:37:26.016454 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:37:26.016152 systemd-resolved[1546]: Flushed all caches. May 8 00:37:26.100917 systemd-networkd[1293]: vxlan.calico: Link UP May 8 00:37:26.100922 systemd-networkd[1293]: vxlan.calico: Gained carrier May 8 00:37:27.871445 systemd-networkd[1293]: vxlan.calico: Gained IPv6LL May 8 00:37:30.192510 containerd[1637]: time="2025-05-08T00:37:30.192442986Z" level=info msg="StopPodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" May 8 00:37:30.192953 containerd[1637]: time="2025-05-08T00:37:30.192534164Z" level=info msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.296 [INFO][4412] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.297 [INFO][4412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" iface="eth0" netns="/var/run/netns/cni-7e8a3d8a-435e-7868-0ae4-f8d0e60b1142" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.297 [INFO][4412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" iface="eth0" netns="/var/run/netns/cni-7e8a3d8a-435e-7868-0ae4-f8d0e60b1142" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" iface="eth0" netns="/var/run/netns/cni-7e8a3d8a-435e-7868-0ae4-f8d0e60b1142" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4412] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.512 [INFO][4426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.515 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.515 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.525 [WARNING][4426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.525 [INFO][4426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.526 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:30.530471 containerd[1637]: 2025-05-08 00:37:30.527 [INFO][4412] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:30.532483 systemd[1]: run-netns-cni\x2d7e8a3d8a\x2d435e\x2d7868\x2d0ae4\x2df8d0e60b1142.mount: Deactivated successfully. 
May 8 00:37:30.534265 containerd[1637]: time="2025-05-08T00:37:30.533471066Z" level=info msg="TearDown network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" successfully" May 8 00:37:30.534265 containerd[1637]: time="2025-05-08T00:37:30.533494412Z" level=info msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" returns successfully" May 8 00:37:30.534797 containerd[1637]: time="2025-05-08T00:37:30.534775472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skfjc,Uid:8ca8c9f2-9680-4a61-ad4e-7654297c3c62,Namespace:kube-system,Attempt:1,}" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.297 [INFO][4411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.298 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" iface="eth0" netns="/var/run/netns/cni-4174b129-7837-6049-ec48-af60e21192ab" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.298 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" iface="eth0" netns="/var/run/netns/cni-4174b129-7837-6049-ec48-af60e21192ab" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" iface="eth0" netns="/var/run/netns/cni-4174b129-7837-6049-ec48-af60e21192ab" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.302 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.513 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.515 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.526 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.533 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.533 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.535 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:30.539571 containerd[1637]: 2025-05-08 00:37:30.537 [INFO][4411] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:30.540613 containerd[1637]: time="2025-05-08T00:37:30.539976576Z" level=info msg="TearDown network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" successfully" May 8 00:37:30.540613 containerd[1637]: time="2025-05-08T00:37:30.539988802Z" level=info msg="StopPodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" returns successfully" May 8 00:37:30.540904 containerd[1637]: time="2025-05-08T00:37:30.540888561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-gzmz2,Uid:8a350a15-4629-49d8-9537-09e1c8aafb63,Namespace:calico-apiserver,Attempt:1,}" May 8 00:37:30.543078 systemd[1]: run-netns-cni\x2d4174b129\x2d7837\x2d6049\x2dec48\x2daf60e21192ab.mount: Deactivated successfully. 
May 8 00:37:30.673647 systemd-networkd[1293]: cali587a6cb7427: Link UP May 8 00:37:30.674191 systemd-networkd[1293]: cali587a6cb7427: Gained carrier May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.597 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0 calico-apiserver-84669494cd- calico-apiserver 8a350a15-4629-49d8-9537-09e1c8aafb63 768 0 2025-05-08 00:37:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84669494cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84669494cd-gzmz2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali587a6cb7427 [] []}} ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.597 [INFO][4438] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.639 [INFO][4461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.646 [INFO][4461] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84669494cd-gzmz2", "timestamp":"2025-05-08 00:37:30.639334144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.646 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.647 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.647 [INFO][4461] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.650 [INFO][4461] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.658 [INFO][4461] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.661 [INFO][4461] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.662 [INFO][4461] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.663 [INFO][4461] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.663 [INFO][4461] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.664 [INFO][4461] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8 May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.666 [INFO][4461] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4461] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4461] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" host="localhost" May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:37:30.687421 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.670 [INFO][4438] cni-plugin/k8s.go 386: Populated endpoint ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a350a15-4629-49d8-9537-09e1c8aafb63", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84669494cd-gzmz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali587a6cb7427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.671 [INFO][4438] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.671 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali587a6cb7427 ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.674 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.675 [INFO][4438] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"8a350a15-4629-49d8-9537-09e1c8aafb63", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8", Pod:"calico-apiserver-84669494cd-gzmz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali587a6cb7427", MAC:"6e:00:3f:a7:5e:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:30.689798 containerd[1637]: 2025-05-08 00:37:30.683 [INFO][4438] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-gzmz2" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:30.706405 systemd-networkd[1293]: calia26865c4475: Link UP May 8 00:37:30.706728 systemd-networkd[1293]: calia26865c4475: Gained carrier May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.578 [INFO][4442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0 coredns-7db6d8ff4d- 
kube-system 8ca8c9f2-9680-4a61-ad4e-7654297c3c62 769 0 2025-05-08 00:36:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-skfjc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia26865c4475 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.583 [INFO][4442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.649 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" HandleID="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.657 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" HandleID="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031c7a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-skfjc", "timestamp":"2025-05-08 00:37:30.649339519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.657 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.669 [INFO][4466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.671 [INFO][4466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.676 [INFO][4466] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.679 [INFO][4466] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.683 [INFO][4466] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.690 [INFO][4466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.690 [INFO][4466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.694 [INFO][4466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014 May 8 00:37:30.721281 containerd[1637]: 
2025-05-08 00:37:30.698 [INFO][4466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.701 [INFO][4466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.701 [INFO][4466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" host="localhost" May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.702 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:30.721281 containerd[1637]: 2025-05-08 00:37:30.702 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" HandleID="k8s-pod-network.c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.703 [INFO][4442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ca8c9f2-9680-4a61-ad4e-7654297c3c62", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 
58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-skfjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia26865c4475", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.703 [INFO][4442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.703 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia26865c4475 ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.706 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.707 [INFO][4442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ca8c9f2-9680-4a61-ad4e-7654297c3c62", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014", Pod:"coredns-7db6d8ff4d-skfjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calia26865c4475", MAC:"a2:9b:2e:b7:3b:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:30.722292 containerd[1637]: 2025-05-08 00:37:30.715 [INFO][4442] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014" Namespace="kube-system" Pod="coredns-7db6d8ff4d-skfjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:30.725310 containerd[1637]: time="2025-05-08T00:37:30.725160492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:30.725310 containerd[1637]: time="2025-05-08T00:37:30.725196429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:30.725310 containerd[1637]: time="2025-05-08T00:37:30.725207408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.725310 containerd[1637]: time="2025-05-08T00:37:30.725281940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.742964 containerd[1637]: time="2025-05-08T00:37:30.742637658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:30.743217 containerd[1637]: time="2025-05-08T00:37:30.743038546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:30.743217 containerd[1637]: time="2025-05-08T00:37:30.743049999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.743514 containerd[1637]: time="2025-05-08T00:37:30.743443052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.746710 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:30.759456 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:30.788085 containerd[1637]: time="2025-05-08T00:37:30.787983776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-gzmz2,Uid:8a350a15-4629-49d8-9537-09e1c8aafb63,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\"" May 8 00:37:30.793331 containerd[1637]: time="2025-05-08T00:37:30.793218640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:37:30.797103 containerd[1637]: time="2025-05-08T00:37:30.797086901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-skfjc,Uid:8ca8c9f2-9680-4a61-ad4e-7654297c3c62,Namespace:kube-system,Attempt:1,} returns sandbox id \"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014\"" May 8 00:37:30.798651 containerd[1637]: time="2025-05-08T00:37:30.798638269Z" level=info msg="CreateContainer within sandbox \"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:37:30.816542 containerd[1637]: time="2025-05-08T00:37:30.816518794Z" level=info msg="CreateContainer within sandbox \"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a422eb798a2cfc5fde238daa24fa9554f4c12cb4b6cea32dc669d3f19ee33a7e\"" May 8 00:37:30.817417 containerd[1637]: time="2025-05-08T00:37:30.817400352Z" level=info msg="StartContainer for \"a422eb798a2cfc5fde238daa24fa9554f4c12cb4b6cea32dc669d3f19ee33a7e\"" May 8 00:37:30.855237 containerd[1637]: time="2025-05-08T00:37:30.854966010Z" level=info msg="StartContainer for \"a422eb798a2cfc5fde238daa24fa9554f4c12cb4b6cea32dc669d3f19ee33a7e\" returns successfully" May 8 00:37:31.183560 containerd[1637]: time="2025-05-08T00:37:31.183532379Z" level=info msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" May 8 00:37:31.184235 containerd[1637]: time="2025-05-08T00:37:31.183760716Z" level=info msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.217 [INFO][4642] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.218 [INFO][4642] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" iface="eth0" netns="/var/run/netns/cni-d57f25f2-1203-29c3-7c3d-79373682cbd3" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.218 [INFO][4642] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" iface="eth0" netns="/var/run/netns/cni-d57f25f2-1203-29c3-7c3d-79373682cbd3" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.218 [INFO][4642] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" iface="eth0" netns="/var/run/netns/cni-d57f25f2-1203-29c3-7c3d-79373682cbd3" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.218 [INFO][4642] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.218 [INFO][4642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.238 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.238 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.238 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.242 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.242 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.243 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:31.244944 containerd[1637]: 2025-05-08 00:37:31.243 [INFO][4642] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:31.247499 containerd[1637]: time="2025-05-08T00:37:31.245029760Z" level=info msg="TearDown network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" successfully" May 8 00:37:31.247499 containerd[1637]: time="2025-05-08T00:37:31.245045679Z" level=info msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" returns successfully" May 8 00:37:31.247499 containerd[1637]: time="2025-05-08T00:37:31.245467800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67ddb48bf6-z6mbt,Uid:9d6ebfe3-fdd8-4570-8cd5-315117175ab6,Namespace:calico-system,Attempt:1,}" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.236 [INFO][4646] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.237 [INFO][4646] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" iface="eth0" netns="/var/run/netns/cni-e226815d-0fc0-517e-6b1b-39b18eb95a09" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.237 [INFO][4646] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" iface="eth0" netns="/var/run/netns/cni-e226815d-0fc0-517e-6b1b-39b18eb95a09" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.237 [INFO][4646] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" iface="eth0" netns="/var/run/netns/cni-e226815d-0fc0-517e-6b1b-39b18eb95a09" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.237 [INFO][4646] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.237 [INFO][4646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.258 [INFO][4665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.258 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.258 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.264 [WARNING][4665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.264 [INFO][4665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.266 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:31.275693 containerd[1637]: 2025-05-08 00:37:31.268 [INFO][4646] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:31.277049 containerd[1637]: time="2025-05-08T00:37:31.276002225Z" level=info msg="TearDown network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" successfully" May 8 00:37:31.277049 containerd[1637]: time="2025-05-08T00:37:31.276019388Z" level=info msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" returns successfully" May 8 00:37:31.277049 containerd[1637]: time="2025-05-08T00:37:31.276751154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lxxx,Uid:aa7783fa-83ef-4b49-bf44-ff04e1503a73,Namespace:kube-system,Attempt:1,}" May 8 00:37:31.370969 systemd-networkd[1293]: calid973223df80: Link UP May 8 00:37:31.371570 systemd-networkd[1293]: calid973223df80: Gained carrier May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.288 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0 
calico-kube-controllers-67ddb48bf6- calico-system 9d6ebfe3-fdd8-4570-8cd5-315117175ab6 784 0 2025-05-08 00:37:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67ddb48bf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67ddb48bf6-z6mbt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid973223df80 [] []}} ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.288 [INFO][4672] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.322 [INFO][4696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.333 [INFO][4696] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003bd320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67ddb48bf6-z6mbt", "timestamp":"2025-05-08 00:37:31.322237012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.333 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.333 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.333 [INFO][4696] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.334 [INFO][4696] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.342 [INFO][4696] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.345 [INFO][4696] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.347 [INFO][4696] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.348 [INFO][4696] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.348 [INFO][4696] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" 
host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.349 [INFO][4696] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.356 [INFO][4696] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.364 [INFO][4696] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.364 [INFO][4696] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" host="localhost" May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.364 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:37:31.388437 containerd[1637]: 2025-05-08 00:37:31.364 [INFO][4696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.367 [INFO][4672] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0", GenerateName:"calico-kube-controllers-67ddb48bf6-", Namespace:"calico-system", SelfLink:"", UID:"9d6ebfe3-fdd8-4570-8cd5-315117175ab6", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67ddb48bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67ddb48bf6-z6mbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid973223df80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.367 [INFO][4672] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.367 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid973223df80 ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.371 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.373 [INFO][4672] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0", GenerateName:"calico-kube-controllers-67ddb48bf6-", Namespace:"calico-system", SelfLink:"", UID:"9d6ebfe3-fdd8-4570-8cd5-315117175ab6", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67ddb48bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e", Pod:"calico-kube-controllers-67ddb48bf6-z6mbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid973223df80", MAC:"e6:29:fd:a6:ae:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:31.392164 containerd[1637]: 2025-05-08 00:37:31.385 [INFO][4672] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Namespace="calico-system" Pod="calico-kube-controllers-67ddb48bf6-z6mbt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:31.395475 kubelet[2965]: I0508 00:37:31.394935 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-skfjc" 
podStartSLOduration=33.394922767 podStartE2EDuration="33.394922767s" podCreationTimestamp="2025-05-08 00:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:31.39452499 +0000 UTC m=+47.327833624" watchObservedRunningTime="2025-05-08 00:37:31.394922767 +0000 UTC m=+47.328231394" May 8 00:37:31.428967 systemd-networkd[1293]: cali1818f0681b3: Link UP May 8 00:37:31.429663 systemd-networkd[1293]: cali1818f0681b3: Gained carrier May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.315 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0 coredns-7db6d8ff4d- kube-system aa7783fa-83ef-4b49-bf44-ff04e1503a73 785 0 2025-05-08 00:36:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2lxxx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1818f0681b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.315 [INFO][4684] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.352 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" 
HandleID="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.365 [INFO][4707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" HandleID="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2lxxx", "timestamp":"2025-05-08 00:37:31.352212413 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.365 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.365 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.365 [INFO][4707] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.370 [INFO][4707] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.384 [INFO][4707] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.395 [INFO][4707] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.397 [INFO][4707] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.400 [INFO][4707] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.400 [INFO][4707] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.405 [INFO][4707] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919 May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.410 [INFO][4707] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.417 [INFO][4707] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.417 [INFO][4707] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" host="localhost" May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.417 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:31.440536 containerd[1637]: 2025-05-08 00:37:31.417 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" HandleID="k8s-pod-network.92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.426 [INFO][4684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa7783fa-83ef-4b49-bf44-ff04e1503a73", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-2lxxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1818f0681b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.426 [INFO][4684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.426 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1818f0681b3 ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.429 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 
00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.430 [INFO][4684] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa7783fa-83ef-4b49-bf44-ff04e1503a73", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919", Pod:"coredns-7db6d8ff4d-2lxxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1818f0681b3", MAC:"92:0d:d2:90:13:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:31.441762 containerd[1637]: 2025-05-08 00:37:31.437 [INFO][4684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2lxxx" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:31.444595 containerd[1637]: time="2025-05-08T00:37:31.444254711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:31.444595 containerd[1637]: time="2025-05-08T00:37:31.444293961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:31.444595 containerd[1637]: time="2025-05-08T00:37:31.444404264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:31.444595 containerd[1637]: time="2025-05-08T00:37:31.444482004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:31.470427 containerd[1637]: time="2025-05-08T00:37:31.470192357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:31.470427 containerd[1637]: time="2025-05-08T00:37:31.470275958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:31.470930 containerd[1637]: time="2025-05-08T00:37:31.470884786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:31.471384 containerd[1637]: time="2025-05-08T00:37:31.471109938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:31.475703 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:31.495234 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:31.502705 containerd[1637]: time="2025-05-08T00:37:31.502671907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67ddb48bf6-z6mbt,Uid:9d6ebfe3-fdd8-4570-8cd5-315117175ab6,Namespace:calico-system,Attempt:1,} returns sandbox id \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\"" May 8 00:37:31.519206 containerd[1637]: time="2025-05-08T00:37:31.519111692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2lxxx,Uid:aa7783fa-83ef-4b49-bf44-ff04e1503a73,Namespace:kube-system,Attempt:1,} returns sandbox id \"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919\"" May 8 00:37:31.522122 containerd[1637]: time="2025-05-08T00:37:31.522006270Z" level=info msg="CreateContainer within sandbox \"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:37:31.526594 containerd[1637]: time="2025-05-08T00:37:31.526548640Z" level=info msg="CreateContainer within sandbox \"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ecb9457218756f27441cb32dc17a8ac0639f2b607cdee559d66aa3248d6d1f4\"" May 8 00:37:31.527064 containerd[1637]: time="2025-05-08T00:37:31.526995420Z" level=info msg="StartContainer for \"7ecb9457218756f27441cb32dc17a8ac0639f2b607cdee559d66aa3248d6d1f4\"" May 8 
00:37:31.535526 systemd[1]: run-netns-cni\x2de226815d\x2d0fc0\x2d517e\x2d6b1b\x2d39b18eb95a09.mount: Deactivated successfully. May 8 00:37:31.535950 systemd[1]: run-netns-cni\x2dd57f25f2\x2d1203\x2d29c3\x2d7c3d\x2d79373682cbd3.mount: Deactivated successfully. May 8 00:37:31.552559 systemd[1]: run-containerd-runc-k8s.io-7ecb9457218756f27441cb32dc17a8ac0639f2b607cdee559d66aa3248d6d1f4-runc.EldmOe.mount: Deactivated successfully. May 8 00:37:31.570214 containerd[1637]: time="2025-05-08T00:37:31.570189027Z" level=info msg="StartContainer for \"7ecb9457218756f27441cb32dc17a8ac0639f2b607cdee559d66aa3248d6d1f4\" returns successfully" May 8 00:37:32.453383 kubelet[2965]: I0508 00:37:32.452865 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2lxxx" podStartSLOduration=34.452853619 podStartE2EDuration="34.452853619s" podCreationTimestamp="2025-05-08 00:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:32.422107979 +0000 UTC m=+48.355416613" watchObservedRunningTime="2025-05-08 00:37:32.452853619 +0000 UTC m=+48.386162246" May 8 00:37:32.479492 systemd-networkd[1293]: calia26865c4475: Gained IPv6LL May 8 00:37:32.671549 systemd-networkd[1293]: cali587a6cb7427: Gained IPv6LL May 8 00:37:33.119553 systemd-networkd[1293]: cali1818f0681b3: Gained IPv6LL May 8 00:37:33.183723 containerd[1637]: time="2025-05-08T00:37:33.183570087Z" level=info msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.234 [INFO][4884] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.234 [INFO][4884] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" iface="eth0" netns="/var/run/netns/cni-6c326370-3652-7a1f-24bf-0c8040e60708" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.234 [INFO][4884] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" iface="eth0" netns="/var/run/netns/cni-6c326370-3652-7a1f-24bf-0c8040e60708" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.235 [INFO][4884] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" iface="eth0" netns="/var/run/netns/cni-6c326370-3652-7a1f-24bf-0c8040e60708" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.235 [INFO][4884] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.235 [INFO][4884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.253 [INFO][4892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.254 [INFO][4892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.254 [INFO][4892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.260 [WARNING][4892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.260 [INFO][4892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.263 [INFO][4892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:33.266783 containerd[1637]: 2025-05-08 00:37:33.265 [INFO][4884] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:33.268974 containerd[1637]: time="2025-05-08T00:37:33.267613490Z" level=info msg="TearDown network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" successfully" May 8 00:37:33.268974 containerd[1637]: time="2025-05-08T00:37:33.267629736Z" level=info msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" returns successfully" May 8 00:37:33.272230 containerd[1637]: time="2025-05-08T00:37:33.269493261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-bt6hl,Uid:1adaa741-871f-441d-b6ca-732d5537fc5a,Namespace:calico-apiserver,Attempt:1,}" May 8 00:37:33.268992 systemd[1]: run-netns-cni\x2d6c326370\x2d3652\x2d7a1f\x2d24bf\x2d0c8040e60708.mount: Deactivated successfully. 
May 8 00:37:33.388111 systemd-networkd[1293]: calib3fb1a1f64f: Link UP May 8 00:37:33.389656 systemd-networkd[1293]: calib3fb1a1f64f: Gained carrier May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.316 [INFO][4898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0 calico-apiserver-6d6795bb7b- calico-apiserver 1adaa741-871f-441d-b6ca-732d5537fc5a 818 0 2025-05-08 00:37:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d6795bb7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d6795bb7b-bt6hl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3fb1a1f64f [] []}} ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.316 [INFO][4898] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.342 [INFO][4910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" HandleID="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.351 [INFO][4910] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" HandleID="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d6795bb7b-bt6hl", "timestamp":"2025-05-08 00:37:33.342364337 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.351 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.351 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.351 [INFO][4910] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.357 [INFO][4910] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.361 [INFO][4910] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.364 [INFO][4910] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.365 [INFO][4910] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.370 [INFO][4910] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.370 [INFO][4910] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.373 [INFO][4910] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29 May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.377 [INFO][4910] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.381 [INFO][4910] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.381 [INFO][4910] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" host="localhost" May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.381 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:37:33.400938 containerd[1637]: 2025-05-08 00:37:33.381 [INFO][4910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" HandleID="k8s-pod-network.c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.385 [INFO][4898] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1adaa741-871f-441d-b6ca-732d5537fc5a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d6795bb7b-bt6hl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3fb1a1f64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.385 [INFO][4898] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.385 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3fb1a1f64f ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.389 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.390 [INFO][4898] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"1adaa741-871f-441d-b6ca-732d5537fc5a", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29", Pod:"calico-apiserver-6d6795bb7b-bt6hl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3fb1a1f64f", MAC:"fa:22:d1:c7:e9:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:33.402385 containerd[1637]: 2025-05-08 00:37:33.398 [INFO][4898] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-bt6hl" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:33.439480 systemd-networkd[1293]: calid973223df80: Gained IPv6LL May 8 00:37:33.440031 containerd[1637]: time="2025-05-08T00:37:33.439292213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:33.440031 containerd[1637]: time="2025-05-08T00:37:33.439324877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:33.440031 containerd[1637]: time="2025-05-08T00:37:33.439331978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:33.440031 containerd[1637]: time="2025-05-08T00:37:33.439762252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:33.467519 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:33.501865 containerd[1637]: time="2025-05-08T00:37:33.501826986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-bt6hl,Uid:1adaa741-871f-441d-b6ca-732d5537fc5a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29\"" May 8 00:37:33.759645 containerd[1637]: time="2025-05-08T00:37:33.759571783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:33.760245 containerd[1637]: time="2025-05-08T00:37:33.759979211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:37:33.761311 containerd[1637]: time="2025-05-08T00:37:33.760499550Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:33.762437 containerd[1637]: time="2025-05-08T00:37:33.762388943Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:33.763039 containerd[1637]: time="2025-05-08T00:37:33.763019192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.969775937s" May 8 00:37:33.763337 containerd[1637]: time="2025-05-08T00:37:33.763101791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:37:33.764727 containerd[1637]: time="2025-05-08T00:37:33.764704118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 00:37:33.765965 containerd[1637]: time="2025-05-08T00:37:33.765887665Z" level=info msg="CreateContainer within sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:37:33.782473 containerd[1637]: time="2025-05-08T00:37:33.782395983Z" level=info msg="CreateContainer within sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\"" May 8 00:37:33.783184 containerd[1637]: time="2025-05-08T00:37:33.782938268Z" level=info msg="StartContainer for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\"" May 8 00:37:33.856947 containerd[1637]: time="2025-05-08T00:37:33.856880245Z" level=info msg="StartContainer for 
\"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" returns successfully" May 8 00:37:34.184616 containerd[1637]: time="2025-05-08T00:37:34.184592916Z" level=info msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" May 8 00:37:34.186694 containerd[1637]: time="2025-05-08T00:37:34.186675761Z" level=info msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" iface="eth0" netns="/var/run/netns/cni-642e57d7-07ac-7138-7313-336d41f93ac7" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" iface="eth0" netns="/var/run/netns/cni-642e57d7-07ac-7138-7313-336d41f93ac7" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" iface="eth0" netns="/var/run/netns/cni-642e57d7-07ac-7138-7313-336d41f93ac7" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.226 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.282 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.283 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.287 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.291 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.291 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.291 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:34.293408 containerd[1637]: 2025-05-08 00:37:34.292 [INFO][5036] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.239 [INFO][5043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.239 [INFO][5043] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" iface="eth0" netns="/var/run/netns/cni-6eb7d5a6-10f1-b218-4448-dab966cef290" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.240 [INFO][5043] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" iface="eth0" netns="/var/run/netns/cni-6eb7d5a6-10f1-b218-4448-dab966cef290" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.240 [INFO][5043] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" iface="eth0" netns="/var/run/netns/cni-6eb7d5a6-10f1-b218-4448-dab966cef290" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.240 [INFO][5043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.240 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.282 [INFO][5057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.282 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.283 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.286 [WARNING][5057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.286 [INFO][5057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.287 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:34.297035 containerd[1637]: 2025-05-08 00:37:34.290 [INFO][5043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:34.297035 containerd[1637]: time="2025-05-08T00:37:34.295415088Z" level=info msg="TearDown network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" successfully" May 8 00:37:34.297035 containerd[1637]: time="2025-05-08T00:37:34.295430819Z" level=info msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" returns successfully" May 8 00:37:34.297035 containerd[1637]: time="2025-05-08T00:37:34.295846247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htn9m,Uid:a6a1f446-8d54-4427-9c9d-1d9192e66ef3,Namespace:calico-system,Attempt:1,}" May 8 00:37:34.300550 containerd[1637]: time="2025-05-08T00:37:34.297205695Z" level=info msg="TearDown network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" successfully" May 8 00:37:34.300550 containerd[1637]: time="2025-05-08T00:37:34.297216764Z" level=info msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" returns successfully" May 
8 00:37:34.300550 containerd[1637]: time="2025-05-08T00:37:34.299977002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-c6tg9,Uid:600599f4-a6d1-457c-9d46-c0a1214f1987,Namespace:calico-apiserver,Attempt:1,}" May 8 00:37:34.298233 systemd[1]: run-netns-cni\x2d642e57d7\x2d07ac\x2d7138\x2d7313\x2d336d41f93ac7.mount: Deactivated successfully. May 8 00:37:34.298317 systemd[1]: run-netns-cni\x2d6eb7d5a6\x2d10f1\x2db218\x2d4448\x2ddab966cef290.mount: Deactivated successfully. May 8 00:37:34.429298 kubelet[2965]: I0508 00:37:34.429187 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84669494cd-gzmz2" podStartSLOduration=28.457142437 podStartE2EDuration="31.429171166s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" firstStartedPulling="2025-05-08 00:37:30.792171661 +0000 UTC m=+46.725480285" lastFinishedPulling="2025-05-08 00:37:33.764200384 +0000 UTC m=+49.697509014" observedRunningTime="2025-05-08 00:37:34.428175391 +0000 UTC m=+50.361484025" watchObservedRunningTime="2025-05-08 00:37:34.429171166 +0000 UTC m=+50.362479795" May 8 00:37:34.457935 systemd-networkd[1293]: calid945d34071c: Link UP May 8 00:37:34.460679 systemd-networkd[1293]: calid945d34071c: Gained carrier May 8 00:37:34.464432 systemd-networkd[1293]: calib3fb1a1f64f: Gained IPv6LL May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.348 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--htn9m-eth0 csi-node-driver- calico-system a6a1f446-8d54-4427-9c9d-1d9192e66ef3 830 0 2025-05-08 00:37:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
localhost csi-node-driver-htn9m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid945d34071c [] []}} ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.348 [INFO][5065] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.376 [INFO][5077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" HandleID="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.400 [INFO][5077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" HandleID="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002869c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-htn9m", "timestamp":"2025-05-08 00:37:34.375082022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.400 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.400 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.400 [INFO][5077] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.402 [INFO][5077] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.412 [INFO][5077] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.421 [INFO][5077] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.423 [INFO][5077] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.430 [INFO][5077] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.430 [INFO][5077] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.432 [INFO][5077] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.437 [INFO][5077] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.446 [INFO][5077] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.446 [INFO][5077] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" host="localhost" May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.446 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:34.482651 containerd[1637]: 2025-05-08 00:37:34.446 [INFO][5077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" HandleID="k8s-pod-network.78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.453 [INFO][5065] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htn9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6a1f446-8d54-4427-9c9d-1d9192e66ef3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-htn9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid945d34071c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.453 [INFO][5065] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.453 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid945d34071c ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.462 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.462 [INFO][5065] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" 
Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htn9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6a1f446-8d54-4427-9c9d-1d9192e66ef3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a", Pod:"csi-node-driver-htn9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid945d34071c", MAC:"3a:44:67:db:1d:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:34.492673 containerd[1637]: 2025-05-08 00:37:34.478 [INFO][5065] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a" Namespace="calico-system" Pod="csi-node-driver-htn9m" WorkloadEndpoint="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:34.531922 
containerd[1637]: time="2025-05-08T00:37:34.531604680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:34.531922 containerd[1637]: time="2025-05-08T00:37:34.531650133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:34.531922 containerd[1637]: time="2025-05-08T00:37:34.531754956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:34.534370 containerd[1637]: time="2025-05-08T00:37:34.532661912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:34.578489 systemd-networkd[1293]: cali49eacf92d1d: Link UP May 8 00:37:34.579159 systemd-networkd[1293]: cali49eacf92d1d: Gained carrier May 8 00:37:34.586886 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.479 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0 calico-apiserver-84669494cd- calico-apiserver 600599f4-a6d1-457c-9d46-c0a1214f1987 829 0 2025-05-08 00:37:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84669494cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84669494cd-c6tg9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali49eacf92d1d [] []}} ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" 
Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.479 [INFO][5083] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.539 [INFO][5107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.547 [INFO][5107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011bb50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84669494cd-c6tg9", "timestamp":"2025-05-08 00:37:34.539291519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.548 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.548 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.548 [INFO][5107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.549 [INFO][5107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.552 [INFO][5107] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.555 [INFO][5107] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.556 [INFO][5107] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.558 [INFO][5107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.558 [INFO][5107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.559 [INFO][5107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768 May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.562 [INFO][5107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.568 [INFO][5107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.568 [INFO][5107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" host="localhost" May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.568 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:34.602121 containerd[1637]: 2025-05-08 00:37:34.568 [INFO][5107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.576 [INFO][5083] cni-plugin/k8s.go 386: Populated endpoint ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"600599f4-a6d1-457c-9d46-c0a1214f1987", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84669494cd-c6tg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49eacf92d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.576 [INFO][5083] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.576 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49eacf92d1d ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.579 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.582 [INFO][5083] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"600599f4-a6d1-457c-9d46-c0a1214f1987", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768", Pod:"calico-apiserver-84669494cd-c6tg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49eacf92d1d", MAC:"0e:39:22:10:92:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:34.602607 containerd[1637]: 2025-05-08 00:37:34.595 [INFO][5083] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Namespace="calico-apiserver" 
Pod="calico-apiserver-84669494cd-c6tg9" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:34.605515 containerd[1637]: time="2025-05-08T00:37:34.605452535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-htn9m,Uid:a6a1f446-8d54-4427-9c9d-1d9192e66ef3,Namespace:calico-system,Attempt:1,} returns sandbox id \"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a\"" May 8 00:37:34.634895 containerd[1637]: time="2025-05-08T00:37:34.634589199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:34.634895 containerd[1637]: time="2025-05-08T00:37:34.634631636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:34.634895 containerd[1637]: time="2025-05-08T00:37:34.634639266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:34.635217 containerd[1637]: time="2025-05-08T00:37:34.635074801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:34.661993 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:34.688466 containerd[1637]: time="2025-05-08T00:37:34.688378403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84669494cd-c6tg9,Uid:600599f4-a6d1-457c-9d46-c0a1214f1987,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\"" May 8 00:37:34.690862 containerd[1637]: time="2025-05-08T00:37:34.690829925Z" level=info msg="CreateContainer within sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:37:34.695543 containerd[1637]: time="2025-05-08T00:37:34.695524840Z" level=info msg="CreateContainer within sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\"" May 8 00:37:34.696673 containerd[1637]: time="2025-05-08T00:37:34.696653222Z" level=info msg="StartContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\"" May 8 00:37:34.748985 containerd[1637]: time="2025-05-08T00:37:34.747549358Z" level=info msg="StartContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" returns successfully" May 8 00:37:35.428201 kubelet[2965]: I0508 00:37:35.428176 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:35.432453 kubelet[2965]: I0508 00:37:35.431679 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84669494cd-c6tg9" podStartSLOduration=32.431667025 podStartE2EDuration="32.431667025s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:35.43123431 +0000 UTC m=+51.364542943" watchObservedRunningTime="2025-05-08 00:37:35.431667025 +0000 UTC m=+51.364975653" May 8 00:37:35.807592 systemd-networkd[1293]: cali49eacf92d1d: Gained IPv6LL May 8 00:37:35.871565 systemd-networkd[1293]: calid945d34071c: Gained IPv6LL May 8 00:37:36.426775 kubelet[2965]: I0508 00:37:36.426617 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:37.465431 containerd[1637]: time="2025-05-08T00:37:37.465391276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:37.652455 containerd[1637]: time="2025-05-08T00:37:37.652391090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 00:37:37.666689 containerd[1637]: time="2025-05-08T00:37:37.666640285Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:37.673769 containerd[1637]: time="2025-05-08T00:37:37.673729621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:37.674371 containerd[1637]: time="2025-05-08T00:37:37.674264059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 
3.909532285s" May 8 00:37:37.674371 containerd[1637]: time="2025-05-08T00:37:37.674287039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 00:37:37.677920 containerd[1637]: time="2025-05-08T00:37:37.675541525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:37:37.704997 containerd[1637]: time="2025-05-08T00:37:37.704847401Z" level=info msg="CreateContainer within sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:37:37.716277 containerd[1637]: time="2025-05-08T00:37:37.715998371Z" level=info msg="CreateContainer within sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\"" May 8 00:37:37.717658 containerd[1637]: time="2025-05-08T00:37:37.717074647Z" level=info msg="StartContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\"" May 8 00:37:37.780176 containerd[1637]: time="2025-05-08T00:37:37.780156027Z" level=info msg="StartContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" returns successfully" May 8 00:37:38.084074 containerd[1637]: time="2025-05-08T00:37:38.084001647Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:38.084793 containerd[1637]: time="2025-05-08T00:37:38.084769981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:37:38.085913 containerd[1637]: time="2025-05-08T00:37:38.085894682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image 
id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 410.335046ms" May 8 00:37:38.085947 containerd[1637]: time="2025-05-08T00:37:38.085915009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:37:38.086735 containerd[1637]: time="2025-05-08T00:37:38.086719991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:37:38.088673 containerd[1637]: time="2025-05-08T00:37:38.088650050Z" level=info msg="CreateContainer within sandbox \"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:37:38.094583 containerd[1637]: time="2025-05-08T00:37:38.094556757Z" level=info msg="CreateContainer within sandbox \"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc386614ac7276f6154fba88ed89192b90c985aafec70a51ca4efca04b1ec3e2\"" May 8 00:37:38.097132 containerd[1637]: time="2025-05-08T00:37:38.097108610Z" level=info msg="StartContainer for \"cc386614ac7276f6154fba88ed89192b90c985aafec70a51ca4efca04b1ec3e2\"" May 8 00:37:38.144813 containerd[1637]: time="2025-05-08T00:37:38.144787800Z" level=info msg="StartContainer for \"cc386614ac7276f6154fba88ed89192b90c985aafec70a51ca4efca04b1ec3e2\" returns successfully" May 8 00:37:38.544392 kubelet[2965]: I0508 00:37:38.542997 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67ddb48bf6-z6mbt" podStartSLOduration=29.371105225 podStartE2EDuration="35.542985892s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" 
firstStartedPulling="2025-05-08 00:37:31.503363995 +0000 UTC m=+47.436672619" lastFinishedPulling="2025-05-08 00:37:37.675244658 +0000 UTC m=+53.608553286" observedRunningTime="2025-05-08 00:37:38.542119239 +0000 UTC m=+54.475427867" watchObservedRunningTime="2025-05-08 00:37:38.542985892 +0000 UTC m=+54.476294521" May 8 00:37:38.544936 kubelet[2965]: I0508 00:37:38.544673 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d6795bb7b-bt6hl" podStartSLOduration=30.961609134 podStartE2EDuration="35.544666189s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" firstStartedPulling="2025-05-08 00:37:33.503333198 +0000 UTC m=+49.436641822" lastFinishedPulling="2025-05-08 00:37:38.086390253 +0000 UTC m=+54.019698877" observedRunningTime="2025-05-08 00:37:38.531519064 +0000 UTC m=+54.464827693" watchObservedRunningTime="2025-05-08 00:37:38.544666189 +0000 UTC m=+54.477974831" May 8 00:37:39.515005 kubelet[2965]: I0508 00:37:39.514961 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:39.990547 containerd[1637]: time="2025-05-08T00:37:39.990105730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:39.991217 containerd[1637]: time="2025-05-08T00:37:39.991159243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 00:37:39.992097 containerd[1637]: time="2025-05-08T00:37:39.991492855Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:39.992684 containerd[1637]: time="2025-05-08T00:37:39.992672091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 8 00:37:39.993662 containerd[1637]: time="2025-05-08T00:37:39.993649466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.906908859s" May 8 00:37:39.993852 containerd[1637]: time="2025-05-08T00:37:39.993789560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 00:37:39.996240 containerd[1637]: time="2025-05-08T00:37:39.996077959Z" level=info msg="CreateContainer within sandbox \"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 00:37:40.017072 containerd[1637]: time="2025-05-08T00:37:40.017041870Z" level=info msg="CreateContainer within sandbox \"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e1c649f2ce8de40c5e7f18a6f0e88957fe0fdffc76f739441b43d10ac7b37695\"" May 8 00:37:40.017459 containerd[1637]: time="2025-05-08T00:37:40.017443429Z" level=info msg="StartContainer for \"e1c649f2ce8de40c5e7f18a6f0e88957fe0fdffc76f739441b43d10ac7b37695\"" May 8 00:37:40.064506 containerd[1637]: time="2025-05-08T00:37:40.064478710Z" level=info msg="StartContainer for \"e1c649f2ce8de40c5e7f18a6f0e88957fe0fdffc76f739441b43d10ac7b37695\" returns successfully" May 8 00:37:40.065698 containerd[1637]: time="2025-05-08T00:37:40.065531631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 00:37:42.155644 containerd[1637]: time="2025-05-08T00:37:42.155397619Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:42.156031 containerd[1637]: time="2025-05-08T00:37:42.155922797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 00:37:42.156578 containerd[1637]: time="2025-05-08T00:37:42.156316588Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:42.157384 containerd[1637]: time="2025-05-08T00:37:42.157367487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:42.157852 containerd[1637]: time="2025-05-08T00:37:42.157836600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.092284723s" May 8 00:37:42.157931 containerd[1637]: time="2025-05-08T00:37:42.157921190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 00:37:42.177528 containerd[1637]: time="2025-05-08T00:37:42.177377924Z" level=info msg="CreateContainer within sandbox \"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 00:37:42.193937 containerd[1637]: time="2025-05-08T00:37:42.193748246Z" level=info 
msg="CreateContainer within sandbox \"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a4d76740d6d8b2d3bbc956d37115d890181c39f44bb89360fae7857439020961\"" May 8 00:37:42.195002 containerd[1637]: time="2025-05-08T00:37:42.194938131Z" level=info msg="StartContainer for \"a4d76740d6d8b2d3bbc956d37115d890181c39f44bb89360fae7857439020961\"" May 8 00:37:42.252265 containerd[1637]: time="2025-05-08T00:37:42.252242183Z" level=info msg="StartContainer for \"a4d76740d6d8b2d3bbc956d37115d890181c39f44bb89360fae7857439020961\" returns successfully" May 8 00:37:43.218379 kubelet[2965]: I0508 00:37:43.218314 2965 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 00:37:43.268906 kubelet[2965]: I0508 00:37:43.268872 2965 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 00:37:44.000449 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:37:43.999439 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:37:43.999467 systemd-resolved[1546]: Flushed all caches. 
May 8 00:37:44.170105 kubelet[2965]: I0508 00:37:44.169370 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-htn9m" podStartSLOduration=33.603185831 podStartE2EDuration="41.168163729s" podCreationTimestamp="2025-05-08 00:37:03 +0000 UTC" firstStartedPulling="2025-05-08 00:37:34.609787159 +0000 UTC m=+50.543095782" lastFinishedPulling="2025-05-08 00:37:42.174765055 +0000 UTC m=+58.108073680" observedRunningTime="2025-05-08 00:37:42.708304913 +0000 UTC m=+58.641613546" watchObservedRunningTime="2025-05-08 00:37:44.168163729 +0000 UTC m=+60.101472361" May 8 00:37:44.219900 containerd[1637]: time="2025-05-08T00:37:44.219844726Z" level=info msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.364 [WARNING][5470] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htn9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6a1f446-8d54-4427-9c9d-1d9192e66ef3", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a", Pod:"csi-node-driver-htn9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid945d34071c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.366 [INFO][5470] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.366 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" iface="eth0" netns="" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.366 [INFO][5470] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.366 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.382 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.382 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.382 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.386 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.386 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.387 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.390248 containerd[1637]: 2025-05-08 00:37:44.388 [INFO][5470] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.396587 containerd[1637]: time="2025-05-08T00:37:44.396558040Z" level=info msg="TearDown network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" successfully" May 8 00:37:44.396690 containerd[1637]: time="2025-05-08T00:37:44.396679580Z" level=info msg="StopPodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" returns successfully" May 8 00:37:44.427569 containerd[1637]: time="2025-05-08T00:37:44.427501544Z" level=info msg="RemovePodSandbox for \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" May 8 00:37:44.427569 containerd[1637]: time="2025-05-08T00:37:44.427536309Z" level=info msg="Forcibly stopping sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\"" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.452 [WARNING][5498] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--htn9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6a1f446-8d54-4427-9c9d-1d9192e66ef3", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a5d8e3ed30e8bbc5515cadac6b760b8e489b314f5c017aa27cc02d7a57ae1a", Pod:"csi-node-driver-htn9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid945d34071c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.452 [INFO][5498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.452 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" iface="eth0" netns="" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.452 [INFO][5498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.452 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.467 [INFO][5505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.467 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.467 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.472 [WARNING][5505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.472 [INFO][5505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" HandleID="k8s-pod-network.e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" Workload="localhost-k8s-csi--node--driver--htn9m-eth0" May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.473 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.476865 containerd[1637]: 2025-05-08 00:37:44.474 [INFO][5498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83" May 8 00:37:44.477395 containerd[1637]: time="2025-05-08T00:37:44.476888030Z" level=info msg="TearDown network for sandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" successfully" May 8 00:37:44.488224 containerd[1637]: time="2025-05-08T00:37:44.488091712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:44.512452 containerd[1637]: time="2025-05-08T00:37:44.512355777Z" level=info msg="RemovePodSandbox \"e289ef7445f1c45ce3f3d99ae6fd91f0ec78c9b2fa33c1ec3800e4b53e41ed83\" returns successfully" May 8 00:37:44.518297 containerd[1637]: time="2025-05-08T00:37:44.518133700Z" level=info msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.560 [WARNING][5524] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa7783fa-83ef-4b49-bf44-ff04e1503a73", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919", Pod:"coredns-7db6d8ff4d-2lxxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1818f0681b3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.560 [INFO][5524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.560 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" iface="eth0" netns="" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.560 [INFO][5524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.560 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.573 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.573 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.573 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.577 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.577 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.578 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.580478 containerd[1637]: 2025-05-08 00:37:44.579 [INFO][5524] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.581278 containerd[1637]: time="2025-05-08T00:37:44.580515417Z" level=info msg="TearDown network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" successfully" May 8 00:37:44.581278 containerd[1637]: time="2025-05-08T00:37:44.580531947Z" level=info msg="StopPodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" returns successfully" May 8 00:37:44.844634 containerd[1637]: time="2025-05-08T00:37:44.844614142Z" level=info msg="RemovePodSandbox for \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" May 8 00:37:44.844797 containerd[1637]: time="2025-05-08T00:37:44.844676067Z" level=info msg="Forcibly stopping sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\"" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.871 [WARNING][5555] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"aa7783fa-83ef-4b49-bf44-ff04e1503a73", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92f4c252c28adb209e6ef0b2b094dec0155f0b12943bc717cd5222548e6c1919", Pod:"coredns-7db6d8ff4d-2lxxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1818f0681b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.871 [INFO][5555] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.871 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" iface="eth0" netns="" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.871 [INFO][5555] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.871 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.890 [INFO][5562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.890 [INFO][5562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.890 [INFO][5562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.894 [WARNING][5562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.894 [INFO][5562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" HandleID="k8s-pod-network.85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" Workload="localhost-k8s-coredns--7db6d8ff4d--2lxxx-eth0" May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.895 [INFO][5562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.899015 containerd[1637]: 2025-05-08 00:37:44.897 [INFO][5555] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84" May 8 00:37:44.899015 containerd[1637]: time="2025-05-08T00:37:44.898976721Z" level=info msg="TearDown network for sandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" successfully" May 8 00:37:44.905879 containerd[1637]: time="2025-05-08T00:37:44.905853408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:44.905990 containerd[1637]: time="2025-05-08T00:37:44.905897740Z" level=info msg="RemovePodSandbox \"85f808da269ed32da330f62d5567120fb060cade455dd54cca170ee4850d9d84\" returns successfully" May 8 00:37:44.906196 containerd[1637]: time="2025-05-08T00:37:44.906182628Z" level=info msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.928 [WARNING][5580] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0", GenerateName:"calico-kube-controllers-67ddb48bf6-", Namespace:"calico-system", SelfLink:"", UID:"9d6ebfe3-fdd8-4570-8cd5-315117175ab6", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67ddb48bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e", Pod:"calico-kube-controllers-67ddb48bf6-z6mbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid973223df80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.929 [INFO][5580] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.929 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" iface="eth0" netns="" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.929 [INFO][5580] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.929 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.946 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.947 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.947 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.952 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.952 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.953 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.956122 containerd[1637]: 2025-05-08 00:37:44.954 [INFO][5580] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.956997 containerd[1637]: time="2025-05-08T00:37:44.956160922Z" level=info msg="TearDown network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" successfully" May 8 00:37:44.956997 containerd[1637]: time="2025-05-08T00:37:44.956191164Z" level=info msg="StopPodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" returns successfully" May 8 00:37:44.956997 containerd[1637]: time="2025-05-08T00:37:44.956511106Z" level=info msg="RemovePodSandbox for \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" May 8 00:37:44.956997 containerd[1637]: time="2025-05-08T00:37:44.956524422Z" level=info msg="Forcibly stopping sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\"" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.979 [WARNING][5605] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0", GenerateName:"calico-kube-controllers-67ddb48bf6-", Namespace:"calico-system", SelfLink:"", UID:"9d6ebfe3-fdd8-4570-8cd5-315117175ab6", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67ddb48bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e", Pod:"calico-kube-controllers-67ddb48bf6-z6mbt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid973223df80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.979 [INFO][5605] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.979 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" iface="eth0" netns="" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.979 [INFO][5605] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.979 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.993 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.993 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.993 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.997 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.997 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" HandleID="k8s-pod-network.35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.997 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:44.999856 containerd[1637]: 2025-05-08 00:37:44.998 [INFO][5605] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916" May 8 00:37:45.000163 containerd[1637]: time="2025-05-08T00:37:44.999893868Z" level=info msg="TearDown network for sandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" successfully" May 8 00:37:45.010305 containerd[1637]: time="2025-05-08T00:37:45.010274519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:45.010467 containerd[1637]: time="2025-05-08T00:37:45.010321004Z" level=info msg="RemovePodSandbox \"35edfd87b514fae97400e4caca4f6175d9a99921eae3cc3bacadf2944d675916\" returns successfully" May 8 00:37:45.010629 containerd[1637]: time="2025-05-08T00:37:45.010614795Z" level=info msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.034 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ca8c9f2-9680-4a61-ad4e-7654297c3c62", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014", Pod:"coredns-7db6d8ff4d-skfjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia26865c4475", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.034 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.034 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" iface="eth0" netns="" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.034 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.034 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.048 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.049 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.049 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.052 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.052 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.053 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.055404 containerd[1637]: 2025-05-08 00:37:45.054 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.055831 containerd[1637]: time="2025-05-08T00:37:45.055476809Z" level=info msg="TearDown network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" successfully" May 8 00:37:45.055831 containerd[1637]: time="2025-05-08T00:37:45.055495373Z" level=info msg="StopPodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" returns successfully" May 8 00:37:45.055831 containerd[1637]: time="2025-05-08T00:37:45.055806362Z" level=info msg="RemovePodSandbox for \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" May 8 00:37:45.055885 containerd[1637]: time="2025-05-08T00:37:45.055857775Z" level=info msg="Forcibly stopping sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\"" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.077 [WARNING][5655] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8ca8c9f2-9680-4a61-ad4e-7654297c3c62", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 36, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c909bc5f688abaa4a7f4e569bee78e2056d29d3cf5a3cf7bf6a35dc17aded014", Pod:"coredns-7db6d8ff4d-skfjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia26865c4475", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.077 [INFO][5655] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.077 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" iface="eth0" netns="" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.077 [INFO][5655] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.077 [INFO][5655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.090 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.090 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.091 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.095 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.095 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" HandleID="k8s-pod-network.a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" Workload="localhost-k8s-coredns--7db6d8ff4d--skfjc-eth0" May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.096 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.098218 containerd[1637]: 2025-05-08 00:37:45.097 [INFO][5655] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0" May 8 00:37:45.099439 containerd[1637]: time="2025-05-08T00:37:45.098224003Z" level=info msg="TearDown network for sandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" successfully" May 8 00:37:45.102215 containerd[1637]: time="2025-05-08T00:37:45.102192466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:45.102464 containerd[1637]: time="2025-05-08T00:37:45.102239523Z" level=info msg="RemovePodSandbox \"a65e1dce4f5e24d1837743e276d4eb13e301730b30522c0ad228df6b71c129a0\" returns successfully" May 8 00:37:45.102621 containerd[1637]: time="2025-05-08T00:37:45.102603680Z" level=info msg="StopPodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.128 [WARNING][5681] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a350a15-4629-49d8-9537-09e1c8aafb63", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8", Pod:"calico-apiserver-84669494cd-gzmz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali587a6cb7427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.128 [INFO][5681] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.128 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" iface="eth0" netns="" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.129 [INFO][5681] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.129 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.141 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.141 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.141 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.146 [WARNING][5688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.146 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.147 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.149068 containerd[1637]: 2025-05-08 00:37:45.147 [INFO][5681] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.149487 containerd[1637]: time="2025-05-08T00:37:45.149095650Z" level=info msg="TearDown network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" successfully" May 8 00:37:45.149487 containerd[1637]: time="2025-05-08T00:37:45.149111556Z" level=info msg="StopPodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" returns successfully" May 8 00:37:45.149487 containerd[1637]: time="2025-05-08T00:37:45.149436480Z" level=info msg="RemovePodSandbox for \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" May 8 00:37:45.149487 containerd[1637]: time="2025-05-08T00:37:45.149452732Z" level=info msg="Forcibly stopping sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\"" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.171 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a350a15-4629-49d8-9537-09e1c8aafb63", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8", Pod:"calico-apiserver-84669494cd-gzmz2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali587a6cb7427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.172 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.172 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" iface="eth0" netns="" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.172 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.172 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.186 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.186 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.186 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.189 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.189 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" HandleID="k8s-pod-network.de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.190 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.192566 containerd[1637]: 2025-05-08 00:37:45.191 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34" May 8 00:37:45.192566 containerd[1637]: time="2025-05-08T00:37:45.192498146Z" level=info msg="TearDown network for sandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" successfully" May 8 00:37:45.193768 containerd[1637]: time="2025-05-08T00:37:45.193751060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:45.193829 containerd[1637]: time="2025-05-08T00:37:45.193785865Z" level=info msg="RemovePodSandbox \"de9ad73ab25a39cfbc03af33fb28975a627fe98f70858928055b36d703b40f34\" returns successfully" May 8 00:37:45.194156 containerd[1637]: time="2025-05-08T00:37:45.194140541Z" level=info msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.226 [WARNING][5733] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1adaa741-871f-441d-b6ca-732d5537fc5a", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29", Pod:"calico-apiserver-6d6795bb7b-bt6hl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3fb1a1f64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.226 [INFO][5733] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.226 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" iface="eth0" netns="" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.226 [INFO][5733] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.226 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.242 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.242 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.242 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.246 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.246 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.247 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.249872 containerd[1637]: 2025-05-08 00:37:45.248 [INFO][5733] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.250610 containerd[1637]: time="2025-05-08T00:37:45.249905872Z" level=info msg="TearDown network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" successfully" May 8 00:37:45.250610 containerd[1637]: time="2025-05-08T00:37:45.249927817Z" level=info msg="StopPodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" returns successfully" May 8 00:37:45.251015 containerd[1637]: time="2025-05-08T00:37:45.250761917Z" level=info msg="RemovePodSandbox for \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" May 8 00:37:45.251015 containerd[1637]: time="2025-05-08T00:37:45.250812481Z" level=info msg="Forcibly stopping sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\"" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.274 [WARNING][5759] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1adaa741-871f-441d-b6ca-732d5537fc5a", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3def452139becca79dc9949161df9d1142a9b3dc953d3407aaa31b4752edf29", Pod:"calico-apiserver-6d6795bb7b-bt6hl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3fb1a1f64f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.274 [INFO][5759] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.274 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" iface="eth0" netns="" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.274 [INFO][5759] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.274 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.309 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.309 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.309 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.313 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.313 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" HandleID="k8s-pod-network.235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--bt6hl-eth0" May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.314 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.317818 containerd[1637]: 2025-05-08 00:37:45.316 [INFO][5759] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd" May 8 00:37:45.318436 containerd[1637]: time="2025-05-08T00:37:45.317843807Z" level=info msg="TearDown network for sandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" successfully" May 8 00:37:45.327958 containerd[1637]: time="2025-05-08T00:37:45.327924016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 00:37:45.328093 containerd[1637]: time="2025-05-08T00:37:45.327977878Z" level=info msg="RemovePodSandbox \"235c623d0ea98408d68f03e74ab5a67b390280890b3600cc54e80abf10f7dedd\" returns successfully" May 8 00:37:45.328627 containerd[1637]: time="2025-05-08T00:37:45.328415762Z" level=info msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.380 [WARNING][5784] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"600599f4-a6d1-457c-9d46-c0a1214f1987", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768", Pod:"calico-apiserver-84669494cd-c6tg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49eacf92d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.380 [INFO][5784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.380 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" iface="eth0" netns="" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.380 [INFO][5784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.380 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.396 [INFO][5792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.396 [INFO][5792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.396 [INFO][5792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.400 [WARNING][5792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.400 [INFO][5792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.401 [INFO][5792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.404115 containerd[1637]: 2025-05-08 00:37:45.402 [INFO][5784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.404663 containerd[1637]: time="2025-05-08T00:37:45.404480295Z" level=info msg="TearDown network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" successfully" May 8 00:37:45.404663 containerd[1637]: time="2025-05-08T00:37:45.404508754Z" level=info msg="StopPodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" returns successfully" May 8 00:37:45.410851 containerd[1637]: time="2025-05-08T00:37:45.410797567Z" level=info msg="RemovePodSandbox for \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" May 8 00:37:45.410851 containerd[1637]: time="2025-05-08T00:37:45.410823113Z" level=info msg="Forcibly stopping sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\"" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.452 [WARNING][5810] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0", GenerateName:"calico-apiserver-84669494cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"600599f4-a6d1-457c-9d46-c0a1214f1987", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84669494cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768", Pod:"calico-apiserver-84669494cd-c6tg9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49eacf92d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.452 [INFO][5810] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.452 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" iface="eth0" netns="" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.452 [INFO][5810] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.452 [INFO][5810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.467 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.467 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.467 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.471 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.471 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" HandleID="k8s-pod-network.19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.472 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:45.474512 containerd[1637]: 2025-05-08 00:37:45.473 [INFO][5810] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17" May 8 00:37:45.474994 containerd[1637]: time="2025-05-08T00:37:45.474539038Z" level=info msg="TearDown network for sandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" successfully" May 8 00:37:45.489088 containerd[1637]: time="2025-05-08T00:37:45.489058767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:37:45.489335 containerd[1637]: time="2025-05-08T00:37:45.489267203Z" level=info msg="RemovePodSandbox \"19834ce8a9cfaa9b50db698dd94086948e031d2b81df35b0d2ee65e5ea98ff17\" returns successfully" May 8 00:37:46.047629 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:37:46.047635 systemd-resolved[1546]: Flushed all caches. May 8 00:37:46.051860 systemd-journald[1182]: Under memory pressure, flushing caches. 
May 8 00:37:46.504243 kubelet[2965]: I0508 00:37:46.504215 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:46.574429 kubelet[2965]: I0508 00:37:46.574099 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:46.590382 containerd[1637]: time="2025-05-08T00:37:46.589603973Z" level=info msg="StopContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" with timeout 30 (s)" May 8 00:37:46.594459 containerd[1637]: time="2025-05-08T00:37:46.593332685Z" level=info msg="Stop container \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" with signal terminated" May 8 00:37:46.666929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044-rootfs.mount: Deactivated successfully. May 8 00:37:46.682810 containerd[1637]: time="2025-05-08T00:37:46.649531982Z" level=info msg="shim disconnected" id=9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044 namespace=k8s.io May 8 00:37:46.691242 containerd[1637]: time="2025-05-08T00:37:46.691190592Z" level=warning msg="cleaning up after shim disconnected" id=9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044 namespace=k8s.io May 8 00:37:46.691242 containerd[1637]: time="2025-05-08T00:37:46.691214791Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:46.701948 kubelet[2965]: I0508 00:37:46.701825 2965 topology_manager.go:215] "Topology Admit Handler" podUID="940f10b7-156a-4f61-a005-036a60dbe751" podNamespace="calico-apiserver" podName="calico-apiserver-6d6795bb7b-k7g2p" May 8 00:37:46.730849 containerd[1637]: time="2025-05-08T00:37:46.730764206Z" level=info msg="StopContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" returns successfully" May 8 00:37:46.731261 containerd[1637]: time="2025-05-08T00:37:46.731243295Z" level=info msg="StopPodSandbox for 
\"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\"" May 8 00:37:46.731300 containerd[1637]: time="2025-05-08T00:37:46.731274142Z" level=info msg="Container to stop \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:46.733091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768-shm.mount: Deactivated successfully. May 8 00:37:46.757195 containerd[1637]: time="2025-05-08T00:37:46.756996062Z" level=info msg="shim disconnected" id=edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768 namespace=k8s.io May 8 00:37:46.757195 containerd[1637]: time="2025-05-08T00:37:46.757159040Z" level=warning msg="cleaning up after shim disconnected" id=edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768 namespace=k8s.io May 8 00:37:46.757195 containerd[1637]: time="2025-05-08T00:37:46.757166030Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:46.758478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768-rootfs.mount: Deactivated successfully. 
May 8 00:37:46.793853 kubelet[2965]: I0508 00:37:46.793825 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dgxx\" (UniqueName: \"kubernetes.io/projected/940f10b7-156a-4f61-a005-036a60dbe751-kube-api-access-5dgxx\") pod \"calico-apiserver-6d6795bb7b-k7g2p\" (UID: \"940f10b7-156a-4f61-a005-036a60dbe751\") " pod="calico-apiserver/calico-apiserver-6d6795bb7b-k7g2p" May 8 00:37:46.794394 kubelet[2965]: I0508 00:37:46.794354 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/940f10b7-156a-4f61-a005-036a60dbe751-calico-apiserver-certs\") pod \"calico-apiserver-6d6795bb7b-k7g2p\" (UID: \"940f10b7-156a-4f61-a005-036a60dbe751\") " pod="calico-apiserver/calico-apiserver-6d6795bb7b-k7g2p" May 8 00:37:46.799819 systemd-networkd[1293]: cali49eacf92d1d: Link DOWN May 8 00:37:46.799823 systemd-networkd[1293]: cali49eacf92d1d: Lost carrier May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.793 [INFO][5912] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.795 [INFO][5912] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" iface="eth0" netns="/var/run/netns/cni-202f7db1-3ec0-6282-699e-e4d078a87278" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.795 [INFO][5912] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" iface="eth0" netns="/var/run/netns/cni-202f7db1-3ec0-6282-699e-e4d078a87278" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.808 [INFO][5912] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" after=12.695559ms iface="eth0" netns="/var/run/netns/cni-202f7db1-3ec0-6282-699e-e4d078a87278" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.808 [INFO][5912] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.808 [INFO][5912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.821 [INFO][5926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.821 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.821 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.848 [INFO][5926] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.848 [INFO][5926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.849 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:46.851546 containerd[1637]: 2025-05-08 00:37:46.850 [INFO][5912] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:37:46.853449 containerd[1637]: time="2025-05-08T00:37:46.853423024Z" level=info msg="TearDown network for sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" successfully" May 8 00:37:46.853488 containerd[1637]: time="2025-05-08T00:37:46.853448609Z" level=info msg="StopPodSandbox for \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" returns successfully" May 8 00:37:46.854663 systemd[1]: run-netns-cni\x2d202f7db1\x2d3ec0\x2d6282\x2d699e\x2de4d078a87278.mount: Deactivated successfully. 
May 8 00:37:46.995939 kubelet[2965]: I0508 00:37:46.995904 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmgph\" (UniqueName: \"kubernetes.io/projected/600599f4-a6d1-457c-9d46-c0a1214f1987-kube-api-access-rmgph\") pod \"600599f4-a6d1-457c-9d46-c0a1214f1987\" (UID: \"600599f4-a6d1-457c-9d46-c0a1214f1987\") " May 8 00:37:46.995939 kubelet[2965]: I0508 00:37:46.995944 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/600599f4-a6d1-457c-9d46-c0a1214f1987-calico-apiserver-certs\") pod \"600599f4-a6d1-457c-9d46-c0a1214f1987\" (UID: \"600599f4-a6d1-457c-9d46-c0a1214f1987\") " May 8 00:37:47.001086 kubelet[2965]: I0508 00:37:46.999688 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600599f4-a6d1-457c-9d46-c0a1214f1987-kube-api-access-rmgph" (OuterVolumeSpecName: "kube-api-access-rmgph") pod "600599f4-a6d1-457c-9d46-c0a1214f1987" (UID: "600599f4-a6d1-457c-9d46-c0a1214f1987"). InnerVolumeSpecName "kube-api-access-rmgph". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:37:47.001086 kubelet[2965]: I0508 00:37:46.999778 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600599f4-a6d1-457c-9d46-c0a1214f1987-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "600599f4-a6d1-457c-9d46-c0a1214f1987" (UID: "600599f4-a6d1-457c-9d46-c0a1214f1987"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:37:47.023021 containerd[1637]: time="2025-05-08T00:37:47.022953323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-k7g2p,Uid:940f10b7-156a-4f61-a005-036a60dbe751,Namespace:calico-apiserver,Attempt:0,}" May 8 00:37:47.088625 systemd-networkd[1293]: cali97c99f04de2: Link UP May 8 00:37:47.089326 systemd-networkd[1293]: cali97c99f04de2: Gained carrier May 8 00:37:47.097019 kubelet[2965]: I0508 00:37:47.096958 2965 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rmgph\" (UniqueName: \"kubernetes.io/projected/600599f4-a6d1-457c-9d46-c0a1214f1987-kube-api-access-rmgph\") on node \"localhost\" DevicePath \"\"" May 8 00:37:47.097019 kubelet[2965]: I0508 00:37:47.096975 2965 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/600599f4-a6d1-457c-9d46-c0a1214f1987-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.052 [INFO][5939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0 calico-apiserver-6d6795bb7b- calico-apiserver 940f10b7-156a-4f61-a005-036a60dbe751 928 0 2025-05-08 00:37:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d6795bb7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d6795bb7b-k7g2p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97c99f04de2 [] []}} ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.052 [INFO][5939] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.066 [INFO][5950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" HandleID="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.072 [INFO][5950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" HandleID="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d6795bb7b-k7g2p", "timestamp":"2025-05-08 00:37:47.066891197 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.072 [INFO][5950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.072 [INFO][5950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.072 [INFO][5950] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.074 [INFO][5950] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.075 [INFO][5950] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.077 [INFO][5950] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.078 [INFO][5950] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.079 [INFO][5950] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.079 [INFO][5950] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.080 [INFO][5950] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4 May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.082 [INFO][5950] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.085 [INFO][5950] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.085 [INFO][5950] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" host="localhost" May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.085 [INFO][5950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:47.098453 containerd[1637]: 2025-05-08 00:37:47.085 [INFO][5950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" HandleID="k8s-pod-network.2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Workload="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.086 [INFO][5939] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"940f10b7-156a-4f61-a005-036a60dbe751", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d6795bb7b-k7g2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c99f04de2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.086 [INFO][5939] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.086 [INFO][5939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c99f04de2 ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.089 [INFO][5939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.089 [INFO][5939] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0", GenerateName:"calico-apiserver-6d6795bb7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"940f10b7-156a-4f61-a005-036a60dbe751", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6795bb7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4", Pod:"calico-apiserver-6d6795bb7b-k7g2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c99f04de2", MAC:"0a:47:df:77:af:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:47.098824 containerd[1637]: 2025-05-08 00:37:47.097 [INFO][5939] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d6795bb7b-k7g2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d6795bb7b--k7g2p-eth0" May 8 00:37:47.112210 containerd[1637]: time="2025-05-08T00:37:47.112107999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:47.112508 containerd[1637]: time="2025-05-08T00:37:47.112218196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:47.112508 containerd[1637]: time="2025-05-08T00:37:47.112262513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:47.112574 containerd[1637]: time="2025-05-08T00:37:47.112553209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:47.135893 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:47.156540 containerd[1637]: time="2025-05-08T00:37:47.156519764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6795bb7b-k7g2p,Uid:940f10b7-156a-4f61-a005-036a60dbe751,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4\"" May 8 00:37:47.163279 containerd[1637]: time="2025-05-08T00:37:47.163245814Z" level=info msg="CreateContainer within sandbox \"2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:37:47.166986 containerd[1637]: time="2025-05-08T00:37:47.166898881Z" level=info msg="CreateContainer within sandbox \"2ef0e94048b0bec9db2c67c937c2df7847dcb988b8988d0046da9c31bdde45e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"69ca071000c6a53ef40633a63300b8db96307bf05da44013bf79e72b493e75c8\"" May 8 00:37:47.168101 containerd[1637]: time="2025-05-08T00:37:47.167879803Z" level=info msg="StartContainer for \"69ca071000c6a53ef40633a63300b8db96307bf05da44013bf79e72b493e75c8\"" May 8 00:37:47.215905 containerd[1637]: time="2025-05-08T00:37:47.215886958Z" level=info msg="StartContainer for \"69ca071000c6a53ef40633a63300b8db96307bf05da44013bf79e72b493e75c8\" returns successfully" May 8 00:37:47.654205 systemd[1]: var-lib-kubelet-pods-600599f4\x2da6d1\x2d457c\x2d9d46\x2dc0a1214f1987-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmgph.mount: Deactivated successfully. May 8 00:37:47.654290 systemd[1]: var-lib-kubelet-pods-600599f4\x2da6d1\x2d457c\x2d9d46\x2dc0a1214f1987-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 8 00:37:47.768498 kubelet[2965]: I0508 00:37:47.768478 2965 scope.go:117] "RemoveContainer" containerID="9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044" May 8 00:37:47.769222 containerd[1637]: time="2025-05-08T00:37:47.769202061Z" level=info msg="RemoveContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\"" May 8 00:37:47.781759 containerd[1637]: time="2025-05-08T00:37:47.781733452Z" level=info msg="RemoveContainer for \"9c436249bc233085fd7eb4114e7ad2245767fa5f80372d448c46483f817b5044\" returns successfully" May 8 00:37:47.853290 kubelet[2965]: I0508 00:37:47.852760 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d6795bb7b-k7g2p" podStartSLOduration=1.845270589 podStartE2EDuration="1.845270589s" podCreationTimestamp="2025-05-08 00:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:47.792185981 +0000 UTC m=+63.725494614" watchObservedRunningTime="2025-05-08 00:37:47.845270589 +0000 UTC m=+63.778579224" May 
8 00:37:48.095565 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:37:48.097598 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:37:48.095569 systemd-resolved[1546]: Flushed all caches. May 8 00:37:48.188746 kubelet[2965]: I0508 00:37:48.188695 2965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" path="/var/lib/kubelet/pods/600599f4-a6d1-457c-9d46-c0a1214f1987/volumes" May 8 00:37:48.245020 kubelet[2965]: I0508 00:37:48.244996 2965 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:37:48.246031 containerd[1637]: time="2025-05-08T00:37:48.245988381Z" level=info msg="StopContainer for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" with timeout 30 (s)" May 8 00:37:48.247790 containerd[1637]: time="2025-05-08T00:37:48.247567535Z" level=info msg="Stop container \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" with signal terminated" May 8 00:37:48.280062 containerd[1637]: time="2025-05-08T00:37:48.279955494Z" level=info msg="shim disconnected" id=90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157 namespace=k8s.io May 8 00:37:48.280252 containerd[1637]: time="2025-05-08T00:37:48.280045738Z" level=warning msg="cleaning up after shim disconnected" id=90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157 namespace=k8s.io May 8 00:37:48.280252 containerd[1637]: time="2025-05-08T00:37:48.280168374Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:48.280946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157-rootfs.mount: Deactivated successfully. 
May 8 00:37:48.293749 containerd[1637]: time="2025-05-08T00:37:48.293717550Z" level=info msg="StopContainer for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" returns successfully" May 8 00:37:48.294154 containerd[1637]: time="2025-05-08T00:37:48.294139952Z" level=info msg="StopPodSandbox for \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\"" May 8 00:37:48.294184 containerd[1637]: time="2025-05-08T00:37:48.294162382Z" level=info msg="Container to stop \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:48.296082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8-shm.mount: Deactivated successfully. May 8 00:37:48.315112 containerd[1637]: time="2025-05-08T00:37:48.314054413Z" level=info msg="shim disconnected" id=274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8 namespace=k8s.io May 8 00:37:48.315112 containerd[1637]: time="2025-05-08T00:37:48.314252041Z" level=warning msg="cleaning up after shim disconnected" id=274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8 namespace=k8s.io May 8 00:37:48.315112 containerd[1637]: time="2025-05-08T00:37:48.314261570Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:48.317085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8-rootfs.mount: Deactivated successfully. 
May 8 00:37:48.371000 systemd-networkd[1293]: cali587a6cb7427: Link DOWN May 8 00:37:48.371005 systemd-networkd[1293]: cali587a6cb7427: Lost carrier May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.367 [INFO][6148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.367 [INFO][6148] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" iface="eth0" netns="/var/run/netns/cni-d6c1cdfd-522e-05a9-b52c-4f9b44310079" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.369 [INFO][6148] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" iface="eth0" netns="/var/run/netns/cni-d6c1cdfd-522e-05a9-b52c-4f9b44310079" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.382 [INFO][6148] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" after=14.527839ms iface="eth0" netns="/var/run/netns/cni-d6c1cdfd-522e-05a9-b52c-4f9b44310079" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.382 [INFO][6148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.382 [INFO][6148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.394 [INFO][6161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.394 [INFO][6161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.394 [INFO][6161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.413 [INFO][6161] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.413 [INFO][6161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0" May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.414 [INFO][6161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:48.415959 containerd[1637]: 2025-05-08 00:37:48.415 [INFO][6148] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" May 8 00:37:48.417065 containerd[1637]: time="2025-05-08T00:37:48.416191600Z" level=info msg="TearDown network for sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" successfully" May 8 00:37:48.417065 containerd[1637]: time="2025-05-08T00:37:48.416210491Z" level=info msg="StopPodSandbox for \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" returns successfully" May 8 00:37:48.505536 kubelet[2965]: I0508 00:37:48.505510 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpvcz\" (UniqueName: \"kubernetes.io/projected/8a350a15-4629-49d8-9537-09e1c8aafb63-kube-api-access-rpvcz\") pod \"8a350a15-4629-49d8-9537-09e1c8aafb63\" (UID: \"8a350a15-4629-49d8-9537-09e1c8aafb63\") " May 8 00:37:48.505646 kubelet[2965]: I0508 00:37:48.505548 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a350a15-4629-49d8-9537-09e1c8aafb63-calico-apiserver-certs\") pod \"8a350a15-4629-49d8-9537-09e1c8aafb63\" (UID: \"8a350a15-4629-49d8-9537-09e1c8aafb63\") " May 8 00:37:48.508373 kubelet[2965]: I0508 00:37:48.508353 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a350a15-4629-49d8-9537-09e1c8aafb63-kube-api-access-rpvcz" (OuterVolumeSpecName: "kube-api-access-rpvcz") pod "8a350a15-4629-49d8-9537-09e1c8aafb63" (UID: "8a350a15-4629-49d8-9537-09e1c8aafb63"). InnerVolumeSpecName "kube-api-access-rpvcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:37:48.508480 kubelet[2965]: I0508 00:37:48.508461 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a350a15-4629-49d8-9537-09e1c8aafb63-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "8a350a15-4629-49d8-9537-09e1c8aafb63" (UID: "8a350a15-4629-49d8-9537-09e1c8aafb63"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:37:48.606390 kubelet[2965]: I0508 00:37:48.606364 2965 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rpvcz\" (UniqueName: \"kubernetes.io/projected/8a350a15-4629-49d8-9537-09e1c8aafb63-kube-api-access-rpvcz\") on node \"localhost\" DevicePath \"\"" May 8 00:37:48.606390 kubelet[2965]: I0508 00:37:48.606388 2965 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a350a15-4629-49d8-9537-09e1c8aafb63-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:37:48.650867 systemd[1]: run-netns-cni\x2dd6c1cdfd\x2d522e\x2d05a9\x2db52c\x2d4f9b44310079.mount: Deactivated successfully. May 8 00:37:48.651683 systemd[1]: var-lib-kubelet-pods-8a350a15\x2d4629\x2d49d8\x2d9537\x2d09e1c8aafb63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpvcz.mount: Deactivated successfully. May 8 00:37:48.652039 systemd[1]: var-lib-kubelet-pods-8a350a15\x2d4629\x2d49d8\x2d9537\x2d09e1c8aafb63-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 8 00:37:48.757939 kubelet[2965]: I0508 00:37:48.757884 2965 scope.go:117] "RemoveContainer" containerID="90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157" May 8 00:37:48.759390 containerd[1637]: time="2025-05-08T00:37:48.759359866Z" level=info msg="RemoveContainer for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\"" May 8 00:37:48.764386 containerd[1637]: time="2025-05-08T00:37:48.764359973Z" level=info msg="RemoveContainer for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" returns successfully" May 8 00:37:48.764663 kubelet[2965]: I0508 00:37:48.764513 2965 scope.go:117] "RemoveContainer" containerID="90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157" May 8 00:37:48.786302 containerd[1637]: time="2025-05-08T00:37:48.768911010Z" level=error msg="ContainerStatus for \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\": not found" May 8 00:37:48.791400 kubelet[2965]: E0508 00:37:48.791333 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\": not found" containerID="90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157" May 8 00:37:48.791400 kubelet[2965]: I0508 00:37:48.791385 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157"} err="failed to get container status \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b431eb58d294c55f6774e885895b227e621bb4f26c69b5accddaa59beb9157\": not found" May 8 00:37:49.055546 
systemd-networkd[1293]: cali97c99f04de2: Gained IPv6LL May 8 00:37:49.493071 containerd[1637]: time="2025-05-08T00:37:49.492578590Z" level=info msg="StopContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" with timeout 300 (s)" May 8 00:37:49.495679 containerd[1637]: time="2025-05-08T00:37:49.495327621Z" level=info msg="Stop container \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" with signal terminated" May 8 00:37:49.689940 systemd[1]: run-containerd-runc-k8s.io-1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a-runc.pJEW3l.mount: Deactivated successfully. May 8 00:37:49.744503 containerd[1637]: time="2025-05-08T00:37:49.744232824Z" level=info msg="StopContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" with timeout 5 (s)" May 8 00:37:49.745089 containerd[1637]: time="2025-05-08T00:37:49.744736184Z" level=info msg="Stop container \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" with signal terminated" May 8 00:37:49.779682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a-rootfs.mount: Deactivated successfully. 
May 8 00:37:49.780801 containerd[1637]: time="2025-05-08T00:37:49.780678387Z" level=info msg="shim disconnected" id=1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a namespace=k8s.io May 8 00:37:49.780801 containerd[1637]: time="2025-05-08T00:37:49.780717141Z" level=warning msg="cleaning up after shim disconnected" id=1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a namespace=k8s.io May 8 00:37:49.780801 containerd[1637]: time="2025-05-08T00:37:49.780722769Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:49.795186 containerd[1637]: time="2025-05-08T00:37:49.795158716Z" level=info msg="StopContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" returns successfully" May 8 00:37:49.795698 containerd[1637]: time="2025-05-08T00:37:49.795495873Z" level=info msg="StopPodSandbox for \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\"" May 8 00:37:49.795698 containerd[1637]: time="2025-05-08T00:37:49.795518334Z" level=info msg="Container to stop \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:49.795698 containerd[1637]: time="2025-05-08T00:37:49.795526141Z" level=info msg="Container to stop \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:49.795698 containerd[1637]: time="2025-05-08T00:37:49.795531208Z" level=info msg="Container to stop \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:49.798320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48-shm.mount: Deactivated successfully. 
May 8 00:37:49.815165 containerd[1637]: time="2025-05-08T00:37:49.815043128Z" level=info msg="shim disconnected" id=eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48 namespace=k8s.io May 8 00:37:49.815165 containerd[1637]: time="2025-05-08T00:37:49.815080350Z" level=warning msg="cleaning up after shim disconnected" id=eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48 namespace=k8s.io May 8 00:37:49.815165 containerd[1637]: time="2025-05-08T00:37:49.815086961Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:49.815109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48-rootfs.mount: Deactivated successfully. May 8 00:37:49.827050 containerd[1637]: time="2025-05-08T00:37:49.826979708Z" level=info msg="TearDown network for sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" successfully" May 8 00:37:49.827050 containerd[1637]: time="2025-05-08T00:37:49.827006180Z" level=info msg="StopPodSandbox for \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" returns successfully" May 8 00:37:49.846548 kubelet[2965]: I0508 00:37:49.846374 2965 topology_manager.go:215] "Topology Admit Handler" podUID="e6f81d35-ad88-4592-b6a2-e4cb7e57b962" podNamespace="calico-system" podName="calico-node-sqdnl" May 8 00:37:49.853966 kubelet[2965]: E0508 00:37:49.853944 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" containerName="flexvol-driver" May 8 00:37:49.854038 kubelet[2965]: E0508 00:37:49.853972 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" containerName="calico-node" May 8 00:37:49.854038 kubelet[2965]: E0508 00:37:49.853979 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" containerName="install-cni" May 8 00:37:49.854038 kubelet[2965]: 
E0508 00:37:49.853982 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" containerName="calico-apiserver" May 8 00:37:49.854038 kubelet[2965]: E0508 00:37:49.853986 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" containerName="calico-apiserver" May 8 00:37:49.856720 kubelet[2965]: I0508 00:37:49.856612 2965 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" containerName="calico-node" May 8 00:37:49.856720 kubelet[2965]: I0508 00:37:49.856632 2965 memory_manager.go:354] "RemoveStaleState removing state" podUID="600599f4-a6d1-457c-9d46-c0a1214f1987" containerName="calico-apiserver" May 8 00:37:49.856720 kubelet[2965]: I0508 00:37:49.856639 2965 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" containerName="calico-apiserver" May 8 00:37:49.875175 containerd[1637]: time="2025-05-08T00:37:49.875120819Z" level=info msg="StopContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" with timeout 30 (s)" May 8 00:37:49.877691 containerd[1637]: time="2025-05-08T00:37:49.877640707Z" level=info msg="Stop container \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" with signal terminated" May 8 00:37:49.909470 containerd[1637]: time="2025-05-08T00:37:49.909380170Z" level=info msg="shim disconnected" id=0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd namespace=k8s.io May 8 00:37:49.909470 containerd[1637]: time="2025-05-08T00:37:49.909416149Z" level=warning msg="cleaning up after shim disconnected" id=0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd namespace=k8s.io May 8 00:37:49.909470 containerd[1637]: time="2025-05-08T00:37:49.909421868Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:49.913330 kubelet[2965]: I0508 00:37:49.913312 2965 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zrj\" (UniqueName: \"kubernetes.io/projected/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-kube-api-access-89zrj\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913334 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-lib-calico\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913360 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-node-certs\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913376 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-net-dir\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913386 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-flexvol-driver-host\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913394 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-policysync\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: 
\"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.913545 kubelet[2965]: I0508 00:37:49.913403 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-log-dir\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.914799 kubelet[2965]: I0508 00:37:49.913412 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-run-calico\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.914799 kubelet[2965]: I0508 00:37:49.913423 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-tigera-ca-bundle\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.914799 kubelet[2965]: I0508 00:37:49.913433 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-xtables-lock\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.914799 kubelet[2965]: I0508 00:37:49.913447 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-lib-modules\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.914799 kubelet[2965]: I0508 00:37:49.913454 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-bin-dir\") pod \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\" (UID: \"eec1817e-5afa-4b99-9c1c-1caa3e33fbd4\") " May 8 00:37:49.915300 kubelet[2965]: I0508 00:37:49.914905 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915300 kubelet[2965]: I0508 00:37:49.914943 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915300 kubelet[2965]: I0508 00:37:49.914955 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915300 kubelet[2965]: I0508 00:37:49.914971 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-policysync" (OuterVolumeSpecName: "policysync") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915300 kubelet[2965]: I0508 00:37:49.914982 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915496 kubelet[2965]: I0508 00:37:49.914992 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915496 kubelet[2965]: I0508 00:37:49.915004 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.915496 kubelet[2965]: I0508 00:37:49.915134 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.917003 kubelet[2965]: I0508 00:37:49.915742 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:37:49.924078 kubelet[2965]: I0508 00:37:49.923959 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-kube-api-access-89zrj" (OuterVolumeSpecName: "kube-api-access-89zrj") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "kube-api-access-89zrj". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:37:49.924236 kubelet[2965]: I0508 00:37:49.924157 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-node-certs" (OuterVolumeSpecName: "node-certs") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:37:49.924897 kubelet[2965]: I0508 00:37:49.924871 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" (UID: "eec1817e-5afa-4b99-9c1c-1caa3e33fbd4"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:37:49.930314 containerd[1637]: time="2025-05-08T00:37:49.930290246Z" level=info msg="StopContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" returns successfully" May 8 00:37:49.930680 containerd[1637]: time="2025-05-08T00:37:49.930664619Z" level=info msg="StopPodSandbox for \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\"" May 8 00:37:49.930714 containerd[1637]: time="2025-05-08T00:37:49.930685034Z" level=info msg="Container to stop \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:37:49.946996 containerd[1637]: time="2025-05-08T00:37:49.946788975Z" level=info msg="shim disconnected" id=b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e namespace=k8s.io May 8 00:37:49.946996 containerd[1637]: time="2025-05-08T00:37:49.946887135Z" level=warning msg="cleaning up after shim disconnected" id=b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e namespace=k8s.io May 8 00:37:49.946996 containerd[1637]: time="2025-05-08T00:37:49.946894671Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:37:49.954830 containerd[1637]: time="2025-05-08T00:37:49.954694540Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:37:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 00:37:49.986583 systemd-networkd[1293]: calid973223df80: Link DOWN May 8 00:37:49.986587 systemd-networkd[1293]: calid973223df80: Lost carrier May 8 00:37:50.014833 kubelet[2965]: I0508 00:37:50.013969 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-cni-net-dir\") pod 
\"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.014833 kubelet[2965]: I0508 00:37:50.014000 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gbc\" (UniqueName: \"kubernetes.io/projected/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-kube-api-access-k7gbc\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.014833 kubelet[2965]: I0508 00:37:50.014012 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-var-run-calico\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.014833 kubelet[2965]: I0508 00:37:50.014023 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-var-lib-calico\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.014833 kubelet[2965]: I0508 00:37:50.014035 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-flexvol-driver-host\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015739 kubelet[2965]: I0508 00:37:50.014045 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-lib-modules\") pod \"calico-node-sqdnl\" (UID: 
\"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015739 kubelet[2965]: I0508 00:37:50.014055 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-xtables-lock\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015739 kubelet[2965]: I0508 00:37:50.014063 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-cni-log-dir\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015739 kubelet[2965]: I0508 00:37:50.014074 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-tigera-ca-bundle\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015739 kubelet[2965]: I0508 00:37:50.014087 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-policysync\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014111 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-node-certs\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 
00:37:50.015843 kubelet[2965]: I0508 00:37:50.014125 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e6f81d35-ad88-4592-b6a2-e4cb7e57b962-cni-bin-dir\") pod \"calico-node-sqdnl\" (UID: \"e6f81d35-ad88-4592-b6a2-e4cb7e57b962\") " pod="calico-system/calico-node-sqdnl" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014140 2965 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014147 2965 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-node-certs\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014152 2965 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014158 2965 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.015843 kubelet[2965]: I0508 00:37:50.014163 2965 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-policysync\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014169 2965 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 
kubelet[2965]: I0508 00:37:50.014174 2965 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014178 2965 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014184 2965 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014189 2965 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014193 2965 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.016018 kubelet[2965]: I0508 00:37:50.014198 2965 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-89zrj\" (UniqueName: \"kubernetes.io/projected/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4-kube-api-access-89zrj\") on node \"localhost\" DevicePath \"\"" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.985 [INFO][6347] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.985 [INFO][6347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" iface="eth0" netns="/var/run/netns/cni-e5b7d3a7-01d8-32c9-0c88-6b0543bb0f38" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.986 [INFO][6347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" iface="eth0" netns="/var/run/netns/cni-e5b7d3a7-01d8-32c9-0c88-6b0543bb0f38" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.997 [INFO][6347] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" after=11.521153ms iface="eth0" netns="/var/run/netns/cni-e5b7d3a7-01d8-32c9-0c88-6b0543bb0f38" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.997 [INFO][6347] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:49.997 [INFO][6347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.010 [INFO][6354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0" May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.010 [INFO][6354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.010 [INFO][6354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.029 [INFO][6354] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.029 [INFO][6354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.030 [INFO][6354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:37:50.032303 containerd[1637]: 2025-05-08 00:37:50.031 [INFO][6347] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:37:50.032963 containerd[1637]: time="2025-05-08T00:37:50.032454093Z" level=info msg="TearDown network for sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" successfully"
May 8 00:37:50.032963 containerd[1637]: time="2025-05-08T00:37:50.032470551Z" level=info msg="StopPodSandbox for \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" returns successfully"
May 8 00:37:50.114623 kubelet[2965]: I0508 00:37:50.114492 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9fm8\" (UniqueName: \"kubernetes.io/projected/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-kube-api-access-b9fm8\") pod \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\" (UID: \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\") "
May 8 00:37:50.114623 kubelet[2965]: I0508 00:37:50.114527 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-tigera-ca-bundle\") pod \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\" (UID: \"9d6ebfe3-fdd8-4570-8cd5-315117175ab6\") "
May 8 00:37:50.118194 kubelet[2965]: I0508 00:37:50.118091 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-kube-api-access-b9fm8" (OuterVolumeSpecName: "kube-api-access-b9fm8") pod "9d6ebfe3-fdd8-4570-8cd5-315117175ab6" (UID: "9d6ebfe3-fdd8-4570-8cd5-315117175ab6"). InnerVolumeSpecName "kube-api-access-b9fm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 00:37:50.119622 kubelet[2965]: I0508 00:37:50.119535 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9d6ebfe3-fdd8-4570-8cd5-315117175ab6" (UID: "9d6ebfe3-fdd8-4570-8cd5-315117175ab6"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 8 00:37:50.168100 containerd[1637]: time="2025-05-08T00:37:50.168008706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sqdnl,Uid:e6f81d35-ad88-4592-b6a2-e4cb7e57b962,Namespace:calico-system,Attempt:0,}"
May 8 00:37:50.185939 kubelet[2965]: I0508 00:37:50.185868 2965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a350a15-4629-49d8-9537-09e1c8aafb63" path="/var/lib/kubelet/pods/8a350a15-4629-49d8-9537-09e1c8aafb63/volumes"
May 8 00:37:50.194655 containerd[1637]: time="2025-05-08T00:37:50.194477037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:50.194655 containerd[1637]: time="2025-05-08T00:37:50.194514609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:50.194655 containerd[1637]: time="2025-05-08T00:37:50.194525037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:50.194655 containerd[1637]: time="2025-05-08T00:37:50.194574605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:50.215454 kubelet[2965]: I0508 00:37:50.215120 2965 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 8 00:37:50.216090 kubelet[2965]: I0508 00:37:50.216077 2965 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b9fm8\" (UniqueName: \"kubernetes.io/projected/9d6ebfe3-fdd8-4570-8cd5-315117175ab6-kube-api-access-b9fm8\") on node \"localhost\" DevicePath \"\""
May 8 00:37:50.222546 containerd[1637]: time="2025-05-08T00:37:50.222492552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sqdnl,Uid:e6f81d35-ad88-4592-b6a2-e4cb7e57b962,Namespace:calico-system,Attempt:0,} returns sandbox id \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\""
May 8 00:37:50.226844 containerd[1637]: time="2025-05-08T00:37:50.226815991Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 8 00:37:50.232431 containerd[1637]: time="2025-05-08T00:37:50.232360290Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"318a46d46b8c087887319cba6458d646f53578c25d2376e84a6816e7549c31b7\""
May 8 00:37:50.233031 containerd[1637]: time="2025-05-08T00:37:50.232984960Z" level=info msg="StartContainer for \"318a46d46b8c087887319cba6458d646f53578c25d2376e84a6816e7549c31b7\""
May 8 00:37:50.290533 containerd[1637]: time="2025-05-08T00:37:50.290471739Z" level=info msg="StartContainer for \"318a46d46b8c087887319cba6458d646f53578c25d2376e84a6816e7549c31b7\" returns successfully"
May 8 00:37:50.373073 containerd[1637]: time="2025-05-08T00:37:50.373017115Z" level=info msg="shim disconnected" id=318a46d46b8c087887319cba6458d646f53578c25d2376e84a6816e7549c31b7 namespace=k8s.io
May 8 00:37:50.373280 containerd[1637]: time="2025-05-08T00:37:50.373131197Z" level=warning msg="cleaning up after shim disconnected" id=318a46d46b8c087887319cba6458d646f53578c25d2376e84a6816e7549c31b7 namespace=k8s.io
May 8 00:37:50.373280 containerd[1637]: time="2025-05-08T00:37:50.373139578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:37:50.559952 containerd[1637]: time="2025-05-08T00:37:50.559832229Z" level=info msg="shim disconnected" id=3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82 namespace=k8s.io
May 8 00:37:50.559952 containerd[1637]: time="2025-05-08T00:37:50.559874853Z" level=warning msg="cleaning up after shim disconnected" id=3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82 namespace=k8s.io
May 8 00:37:50.559952 containerd[1637]: time="2025-05-08T00:37:50.559881072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:37:50.573084 containerd[1637]: time="2025-05-08T00:37:50.573055453Z" level=info msg="StopContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" returns successfully"
May 8 00:37:50.573563 containerd[1637]: time="2025-05-08T00:37:50.573462054Z" level=info msg="StopPodSandbox for \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\""
May 8 00:37:50.573563 containerd[1637]: time="2025-05-08T00:37:50.573500234Z" level=info msg="Container to stop \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:37:50.591233 containerd[1637]: time="2025-05-08T00:37:50.590953250Z" level=info msg="shim disconnected" id=1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba namespace=k8s.io
May 8 00:37:50.591233 containerd[1637]: time="2025-05-08T00:37:50.591180626Z" level=warning msg="cleaning up after shim disconnected" id=1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba namespace=k8s.io
May 8 00:37:50.591233 containerd[1637]: time="2025-05-08T00:37:50.591187392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:37:50.606783 containerd[1637]: time="2025-05-08T00:37:50.606668906Z" level=info msg="TearDown network for sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" successfully"
May 8 00:37:50.607607 containerd[1637]: time="2025-05-08T00:37:50.606916123Z" level=info msg="StopPodSandbox for \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" returns successfully"
May 8 00:37:50.689102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd-rootfs.mount: Deactivated successfully.
May 8 00:37:50.689202 systemd[1]: var-lib-kubelet-pods-9d6ebfe3\x2dfdd8\x2d4570\x2d8cd5\x2d315117175ab6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
May 8 00:37:50.689543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e-rootfs.mount: Deactivated successfully.
May 8 00:37:50.690003 systemd[1]: run-netns-cni\x2de5b7d3a7\x2d01d8\x2d32c9\x2d0c88\x2d6b0543bb0f38.mount: Deactivated successfully.
May 8 00:37:50.690153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e-shm.mount: Deactivated successfully.
May 8 00:37:50.690308 systemd[1]: var-lib-kubelet-pods-eec1817e\x2d5afa\x2d4b99\x2d9c1c\x2d1caa3e33fbd4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
May 8 00:37:50.690525 systemd[1]: var-lib-kubelet-pods-9d6ebfe3\x2dfdd8\x2d4570\x2d8cd5\x2d315117175ab6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db9fm8.mount: Deactivated successfully.
May 8 00:37:50.690669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82-rootfs.mount: Deactivated successfully.
May 8 00:37:50.691541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba-rootfs.mount: Deactivated successfully.
May 8 00:37:50.691850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba-shm.mount: Deactivated successfully.
May 8 00:37:50.692036 systemd[1]: var-lib-kubelet-pods-eec1817e\x2d5afa\x2d4b99\x2d9c1c\x2d1caa3e33fbd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d89zrj.mount: Deactivated successfully.
May 8 00:37:50.692202 systemd[1]: var-lib-kubelet-pods-eec1817e\x2d5afa\x2d4b99\x2d9c1c\x2d1caa3e33fbd4-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
May 8 00:37:50.720922 kubelet[2965]: I0508 00:37:50.720887 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnnwx\" (UniqueName: \"kubernetes.io/projected/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-kube-api-access-tnnwx\") pod \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") "
May 8 00:37:50.721019 kubelet[2965]: I0508 00:37:50.720932 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-typha-certs\") pod \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") "
May 8 00:37:50.721019 kubelet[2965]: I0508 00:37:50.720953 2965 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-tigera-ca-bundle\") pod \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\" (UID: \"2c167c8d-a0b7-4c38-a16f-3f86af2c2838\") "
May 8 00:37:50.724919 systemd[1]: var-lib-kubelet-pods-2c167c8d\x2da0b7\x2d4c38\x2da16f\x2d3f86af2c2838-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
May 8 00:37:50.726978 systemd[1]: var-lib-kubelet-pods-2c167c8d\x2da0b7\x2d4c38\x2da16f\x2d3f86af2c2838-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnnwx.mount: Deactivated successfully.
May 8 00:37:50.729531 kubelet[2965]: I0508 00:37:50.729505 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-kube-api-access-tnnwx" (OuterVolumeSpecName: "kube-api-access-tnnwx") pod "2c167c8d-a0b7-4c38-a16f-3f86af2c2838" (UID: "2c167c8d-a0b7-4c38-a16f-3f86af2c2838"). InnerVolumeSpecName "kube-api-access-tnnwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 00:37:50.730135 kubelet[2965]: I0508 00:37:50.730022 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2c167c8d-a0b7-4c38-a16f-3f86af2c2838" (UID: "2c167c8d-a0b7-4c38-a16f-3f86af2c2838"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 8 00:37:50.732221 systemd[1]: var-lib-kubelet-pods-2c167c8d\x2da0b7\x2d4c38\x2da16f\x2d3f86af2c2838-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
May 8 00:37:50.733269 kubelet[2965]: I0508 00:37:50.733246 2965 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "2c167c8d-a0b7-4c38-a16f-3f86af2c2838" (UID: "2c167c8d-a0b7-4c38-a16f-3f86af2c2838"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 8 00:37:50.772354 kubelet[2965]: I0508 00:37:50.771034 2965 scope.go:117] "RemoveContainer" containerID="0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd"
May 8 00:37:50.772448 containerd[1637]: time="2025-05-08T00:37:50.771899148Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 8 00:37:50.773918 containerd[1637]: time="2025-05-08T00:37:50.773866968Z" level=info msg="RemoveContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\""
May 8 00:37:50.778756 containerd[1637]: time="2025-05-08T00:37:50.778630322Z" level=info msg="RemoveContainer for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" returns successfully"
May 8 00:37:50.780488 kubelet[2965]: I0508 00:37:50.779631 2965 scope.go:117] "RemoveContainer" containerID="0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd"
May 8 00:37:50.781376 containerd[1637]: time="2025-05-08T00:37:50.780563645Z" level=error msg="ContainerStatus for \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\": not found"
May 8 00:37:50.785246 kubelet[2965]: E0508 00:37:50.783596 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\": not found" containerID="0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd"
May 8 00:37:50.785246 kubelet[2965]: I0508 00:37:50.783640 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd"} err="failed to get container status \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ee833785bb985b3f3101797a221cc86140c2e0087280819e7facda593131efd\": not found"
May 8 00:37:50.785246 kubelet[2965]: I0508 00:37:50.783658 2965 scope.go:117] "RemoveContainer" containerID="1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a"
May 8 00:37:50.790655 containerd[1637]: time="2025-05-08T00:37:50.790614101Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647\""
May 8 00:37:50.792718 containerd[1637]: time="2025-05-08T00:37:50.791746318Z" level=info msg="RemoveContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\""
May 8 00:37:50.794416 containerd[1637]: time="2025-05-08T00:37:50.794080455Z" level=info msg="StartContainer for \"07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647\""
May 8 00:37:50.798422 containerd[1637]: time="2025-05-08T00:37:50.796552258Z" level=info msg="RemoveContainer for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" returns successfully"
May 8 00:37:50.798903 kubelet[2965]: I0508 00:37:50.798892 2965 scope.go:117] "RemoveContainer" containerID="1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f"
May 8 00:37:50.806021 containerd[1637]: time="2025-05-08T00:37:50.805993708Z" level=info msg="RemoveContainer for \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\""
May 8 00:37:50.812862 containerd[1637]: time="2025-05-08T00:37:50.812138734Z" level=info msg="RemoveContainer for \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\" returns successfully"
May 8 00:37:50.813093 kubelet[2965]: I0508 00:37:50.813081 2965 scope.go:117] "RemoveContainer" containerID="33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25"
May 8 00:37:50.814715 containerd[1637]: time="2025-05-08T00:37:50.814607182Z" level=info msg="RemoveContainer for \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\""
May 8 00:37:50.820412 containerd[1637]: time="2025-05-08T00:37:50.820318017Z" level=info msg="RemoveContainer for \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\" returns successfully"
May 8 00:37:50.822843 kubelet[2965]: I0508 00:37:50.822158 2965 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 8 00:37:50.823049 kubelet[2965]: I0508 00:37:50.822946 2965 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tnnwx\" (UniqueName: \"kubernetes.io/projected/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-kube-api-access-tnnwx\") on node \"localhost\" DevicePath \"\""
May 8 00:37:50.823049 kubelet[2965]: I0508 00:37:50.822956 2965 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c167c8d-a0b7-4c38-a16f-3f86af2c2838-typha-certs\") on node \"localhost\" DevicePath \"\""
May 8 00:37:50.823235 kubelet[2965]: I0508 00:37:50.823227 2965 scope.go:117] "RemoveContainer" containerID="1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a"
May 8 00:37:50.824149 containerd[1637]: time="2025-05-08T00:37:50.824114767Z" level=error msg="ContainerStatus for \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\": not found"
May 8 00:37:50.824302 kubelet[2965]: E0508 00:37:50.824208 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\": not found" containerID="1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a"
May 8 00:37:50.824302 kubelet[2965]: I0508 00:37:50.824226 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a"} err="failed to get container status \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1df0fc809281868abff29d85e56af98d898c199404f2ee61e2517d49aff4936a\": not found"
May 8 00:37:50.824471 kubelet[2965]: I0508 00:37:50.824370 2965 scope.go:117] "RemoveContainer" containerID="1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f"
May 8 00:37:50.825034 containerd[1637]: time="2025-05-08T00:37:50.825000995Z" level=error msg="ContainerStatus for \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\": not found"
May 8 00:37:50.826168 kubelet[2965]: E0508 00:37:50.826064 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\": not found" containerID="1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f"
May 8 00:37:50.826168 kubelet[2965]: I0508 00:37:50.826099 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f"} err="failed to get container status \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e74ce039d1f811396c9bb53a15b9a07606a480f5ebba0e4c71739bc2486e75f\": not found"
May 8 00:37:50.826168 kubelet[2965]: I0508 00:37:50.826114 2965 scope.go:117] "RemoveContainer" containerID="33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25"
May 8 00:37:50.826265 containerd[1637]: time="2025-05-08T00:37:50.826249172Z" level=error msg="ContainerStatus for \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\": not found"
May 8 00:37:50.826326 kubelet[2965]: E0508 00:37:50.826312 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\": not found" containerID="33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25"
May 8 00:37:50.826377 kubelet[2965]: I0508 00:37:50.826326 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25"} err="failed to get container status \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\": rpc error: code = NotFound desc = an error occurred when try to find container \"33a7d5023db43c9a0eb26a408542055ff30751ec87c137a89fe308345e353d25\": not found"
May 8 00:37:50.826377 kubelet[2965]: I0508 00:37:50.826336 2965 scope.go:117] "RemoveContainer" containerID="3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82"
May 8 00:37:50.832804 containerd[1637]: time="2025-05-08T00:37:50.832515113Z" level=info msg="RemoveContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\""
May 8 00:37:50.839330 containerd[1637]: time="2025-05-08T00:37:50.837164785Z" level=info msg="RemoveContainer for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" returns successfully"
May 8 00:37:50.839330 containerd[1637]: time="2025-05-08T00:37:50.837719395Z" level=error msg="ContainerStatus for \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\": not found"
May 8 00:37:50.839487 kubelet[2965]: I0508 00:37:50.837575 2965 scope.go:117] "RemoveContainer" containerID="3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82"
May 8 00:37:50.839487 kubelet[2965]: E0508 00:37:50.838434 2965 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\": not found" containerID="3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82"
May 8 00:37:50.839487 kubelet[2965]: I0508 00:37:50.838450 2965 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82"} err="failed to get container status \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e78d53cffe15a6d49f578b062ee6db90656f4507b680b9f2c8ce41959648e82\": not found"
May 8 00:37:50.863795 containerd[1637]: time="2025-05-08T00:37:50.863729823Z" level=info msg="StartContainer for \"07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647\" returns successfully"
May 8 00:37:50.913431 kubelet[2965]: I0508 00:37:50.913295 2965 topology_manager.go:215] "Topology Admit Handler" podUID="2f3c66cf-411d-49d8-af78-e03452ba511d" podNamespace="calico-system" podName="calico-typha-74cc8d76f7-2cff4"
May 8 00:37:50.913431 kubelet[2965]: E0508 00:37:50.913379 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c167c8d-a0b7-4c38-a16f-3f86af2c2838" containerName="calico-typha"
May 8 00:37:50.913431 kubelet[2965]: E0508 00:37:50.913388 2965 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" containerName="calico-kube-controllers"
May 8 00:37:50.913431 kubelet[2965]: I0508 00:37:50.913405 2965 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c167c8d-a0b7-4c38-a16f-3f86af2c2838" containerName="calico-typha"
May 8 00:37:50.913431 kubelet[2965]: I0508 00:37:50.913409 2965 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" containerName="calico-kube-controllers"
May 8 00:37:51.024365 kubelet[2965]: I0508 00:37:51.024302 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2f3c66cf-411d-49d8-af78-e03452ba511d-typha-certs\") pod \"calico-typha-74cc8d76f7-2cff4\" (UID: \"2f3c66cf-411d-49d8-af78-e03452ba511d\") " pod="calico-system/calico-typha-74cc8d76f7-2cff4"
May 8 00:37:51.024365 kubelet[2965]: I0508 00:37:51.024332 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f3c66cf-411d-49d8-af78-e03452ba511d-tigera-ca-bundle\") pod \"calico-typha-74cc8d76f7-2cff4\" (UID: \"2f3c66cf-411d-49d8-af78-e03452ba511d\") " pod="calico-system/calico-typha-74cc8d76f7-2cff4"
May 8 00:37:51.024365 kubelet[2965]: I0508 00:37:51.024371 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dl8\" (UniqueName: \"kubernetes.io/projected/2f3c66cf-411d-49d8-af78-e03452ba511d-kube-api-access-h7dl8\") pod \"calico-typha-74cc8d76f7-2cff4\" (UID: \"2f3c66cf-411d-49d8-af78-e03452ba511d\") " pod="calico-system/calico-typha-74cc8d76f7-2cff4"
May 8 00:37:51.217606 containerd[1637]: time="2025-05-08T00:37:51.217572712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cc8d76f7-2cff4,Uid:2f3c66cf-411d-49d8-af78-e03452ba511d,Namespace:calico-system,Attempt:0,}"
May 8 00:37:51.279431 containerd[1637]: time="2025-05-08T00:37:51.279333786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:51.279431 containerd[1637]: time="2025-05-08T00:37:51.279390677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:51.279431 containerd[1637]: time="2025-05-08T00:37:51.279398653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:51.279653 containerd[1637]: time="2025-05-08T00:37:51.279593239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:51.324824 containerd[1637]: time="2025-05-08T00:37:51.324669345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cc8d76f7-2cff4,Uid:2f3c66cf-411d-49d8-af78-e03452ba511d,Namespace:calico-system,Attempt:0,} returns sandbox id \"608de813b40952cd0e419ad87b994a49947fb4490f1de74973ad75a2dda216e7\""
May 8 00:37:51.329920 containerd[1637]: time="2025-05-08T00:37:51.329844516Z" level=info msg="CreateContainer within sandbox \"608de813b40952cd0e419ad87b994a49947fb4490f1de74973ad75a2dda216e7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 8 00:37:51.413506 containerd[1637]: time="2025-05-08T00:37:51.413468171Z" level=info msg="CreateContainer within sandbox \"608de813b40952cd0e419ad87b994a49947fb4490f1de74973ad75a2dda216e7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a7437887222fcd96fbf0303daf8f62e13ff3a81286c7e5c44bc8a297fd81e40\""
May 8 00:37:51.416918 containerd[1637]: time="2025-05-08T00:37:51.414318055Z" level=info msg="StartContainer for \"4a7437887222fcd96fbf0303daf8f62e13ff3a81286c7e5c44bc8a297fd81e40\""
May 8 00:37:51.529881 containerd[1637]: time="2025-05-08T00:37:51.529817744Z" level=info msg="StartContainer for \"4a7437887222fcd96fbf0303daf8f62e13ff3a81286c7e5c44bc8a297fd81e40\" returns successfully"
May 8 00:37:51.812668 kubelet[2965]: I0508 00:37:51.812578 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74cc8d76f7-2cff4" podStartSLOduration=2.8125641 podStartE2EDuration="2.8125641s" podCreationTimestamp="2025-05-08 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:51.797294274 +0000 UTC m=+67.730602902" watchObservedRunningTime="2025-05-08 00:37:51.8125641 +0000 UTC m=+67.745872728"
May 8 00:37:52.184677 kubelet[2965]: I0508 00:37:52.184580 2965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c167c8d-a0b7-4c38-a16f-3f86af2c2838" path="/var/lib/kubelet/pods/2c167c8d-a0b7-4c38-a16f-3f86af2c2838/volumes"
May 8 00:37:52.189865 kubelet[2965]: I0508 00:37:52.189794 2965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6ebfe3-fdd8-4570-8cd5-315117175ab6" path="/var/lib/kubelet/pods/9d6ebfe3-fdd8-4570-8cd5-315117175ab6/volumes"
May 8 00:37:52.191182 kubelet[2965]: I0508 00:37:52.191081 2965 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec1817e-5afa-4b99-9c1c-1caa3e33fbd4" path="/var/lib/kubelet/pods/eec1817e-5afa-4b99-9c1c-1caa3e33fbd4/volumes"
May 8 00:37:52.459517 systemd[1]: Started sshd@8-139.178.70.106:22-139.178.68.195:39522.service - OpenSSH per-connection server daemon (139.178.68.195:39522).
May 8 00:37:52.644114 sshd[6654]: Accepted publickey for core from 139.178.68.195 port 39522 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:37:52.648730 sshd[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:37:52.667751 systemd-logind[1616]: New session 10 of user core.
May 8 00:37:52.673566 systemd[1]: Started session-10.scope - Session 10 of User core.
May 8 00:37:53.555545 sshd[6654]: pam_unix(sshd:session): session closed for user core
May 8 00:37:53.566476 systemd[1]: sshd@8-139.178.70.106:22-139.178.68.195:39522.service: Deactivated successfully.
May 8 00:37:53.571252 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:37:53.573964 systemd-logind[1616]: Session 10 logged out. Waiting for processes to exit.
May 8 00:37:53.578330 systemd-logind[1616]: Removed session 10.
May 8 00:37:54.489851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647-rootfs.mount: Deactivated successfully.
May 8 00:37:54.495373 containerd[1637]: time="2025-05-08T00:37:54.495322599Z" level=info msg="shim disconnected" id=07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647 namespace=k8s.io
May 8 00:37:54.496463 containerd[1637]: time="2025-05-08T00:37:54.495646877Z" level=warning msg="cleaning up after shim disconnected" id=07fc706465932a7a43324305465d67ddaad7ccf163c4b396a2dce05c5b7fa647 namespace=k8s.io
May 8 00:37:54.496463 containerd[1637]: time="2025-05-08T00:37:54.495658494Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:37:54.828621 containerd[1637]: time="2025-05-08T00:37:54.828499361Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 8 00:37:54.901176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460556212.mount: Deactivated successfully.
May 8 00:37:54.914215 containerd[1637]: time="2025-05-08T00:37:54.914185805Z" level=info msg="CreateContainer within sandbox \"161d04a0cc3223a9364aa3a792d728304bf70cc515fe521c68dc05ca5ef12d17\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b7b52bdb8b189c821daf78baecea9367fd9c80e8551256e85c5c344932fceb6\""
May 8 00:37:54.919134 containerd[1637]: time="2025-05-08T00:37:54.914995962Z" level=info msg="StartContainer for \"0b7b52bdb8b189c821daf78baecea9367fd9c80e8551256e85c5c344932fceb6\""
May 8 00:37:54.970940 containerd[1637]: time="2025-05-08T00:37:54.970783847Z" level=info msg="StartContainer for \"0b7b52bdb8b189c821daf78baecea9367fd9c80e8551256e85c5c344932fceb6\" returns successfully"
May 8 00:37:55.544957 kubelet[2965]: I0508 00:37:55.544921 2965 topology_manager.go:215] "Topology Admit Handler" podUID="cc3cbce3-e010-43f8-9c0d-b620a698e076" podNamespace="calico-system" podName="calico-kube-controllers-fd5bc9757-2fqjb"
May 8 00:37:55.662851 kubelet[2965]: I0508 00:37:55.662778 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526h8\" (UniqueName: \"kubernetes.io/projected/cc3cbce3-e010-43f8-9c0d-b620a698e076-kube-api-access-526h8\") pod \"calico-kube-controllers-fd5bc9757-2fqjb\" (UID: \"cc3cbce3-e010-43f8-9c0d-b620a698e076\") " pod="calico-system/calico-kube-controllers-fd5bc9757-2fqjb"
May 8 00:37:55.662851 kubelet[2965]: I0508 00:37:55.662822 2965 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc3cbce3-e010-43f8-9c0d-b620a698e076-tigera-ca-bundle\") pod \"calico-kube-controllers-fd5bc9757-2fqjb\" (UID: \"cc3cbce3-e010-43f8-9c0d-b620a698e076\") " pod="calico-system/calico-kube-controllers-fd5bc9757-2fqjb"
May 8 00:37:55.830767 kubelet[2965]: I0508 00:37:55.829978 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sqdnl" podStartSLOduration=6.829964017 podStartE2EDuration="6.829964017s" podCreationTimestamp="2025-05-08 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:55.829749678 +0000 UTC m=+71.763058318" watchObservedRunningTime="2025-05-08 00:37:55.829964017 +0000 UTC m=+71.763272646"
May 8 00:37:55.889298 containerd[1637]: time="2025-05-08T00:37:55.889147852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd5bc9757-2fqjb,Uid:cc3cbce3-e010-43f8-9c0d-b620a698e076,Namespace:calico-system,Attempt:0,}"
May 8 00:37:56.037439 systemd-journald[1182]: Under memory pressure, flushing caches.
May 8 00:37:56.031532 systemd-resolved[1546]: Under memory pressure, flushing caches.
May 8 00:37:56.031538 systemd-resolved[1546]: Flushed all caches.
May 8 00:37:56.349172 systemd-networkd[1293]: cali3d8d36acb5c: Link UP
May 8 00:37:56.350133 systemd-networkd[1293]: cali3d8d36acb5c: Gained carrier
May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.279 [INFO][6786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0 calico-kube-controllers-fd5bc9757- calico-system cc3cbce3-e010-43f8-9c0d-b620a698e076 1154 0 2025-05-08 00:37:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fd5bc9757 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fd5bc9757-2fqjb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3d8d36acb5c [] []}} ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-"
May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.280 [INFO][6786] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0"
May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.308 [INFO][6795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" HandleID="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Workload="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0"
May 8 00:37:56.357977 containerd[1637]: 2025-05-08
00:37:56.315 [INFO][6795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" HandleID="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Workload="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030fa90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fd5bc9757-2fqjb", "timestamp":"2025-05-08 00:37:56.308355416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.315 [INFO][6795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.315 [INFO][6795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.315 [INFO][6795] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.316 [INFO][6795] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.321 [INFO][6795] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.324 [INFO][6795] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.326 [INFO][6795] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.328 [INFO][6795] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.328 [INFO][6795] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.329 [INFO][6795] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.332 [INFO][6795] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.338 [INFO][6795] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.338 [INFO][6795] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" host="localhost" May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.338 [INFO][6795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:37:56.357977 containerd[1637]: 2025-05-08 00:37:56.338 [INFO][6795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" HandleID="k8s-pod-network.955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Workload="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.339 [INFO][6786] cni-plugin/k8s.go 386: Populated endpoint ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0", GenerateName:"calico-kube-controllers-fd5bc9757-", Namespace:"calico-system", SelfLink:"", UID:"cc3cbce3-e010-43f8-9c0d-b620a698e076", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd5bc9757", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fd5bc9757-2fqjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d8d36acb5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.339 [INFO][6786] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.137/32] ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.339 [INFO][6786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d8d36acb5c ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.341 [INFO][6786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.342 [INFO][6786] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0", GenerateName:"calico-kube-controllers-fd5bc9757-", Namespace:"calico-system", SelfLink:"", UID:"cc3cbce3-e010-43f8-9c0d-b620a698e076", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 37, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd5bc9757", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e", Pod:"calico-kube-controllers-fd5bc9757-2fqjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3d8d36acb5c", MAC:"6e:c5:f3:dd:03:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:37:56.359014 containerd[1637]: 2025-05-08 00:37:56.352 [INFO][6786] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e" Namespace="calico-system" Pod="calico-kube-controllers-fd5bc9757-2fqjb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd5bc9757--2fqjb-eth0" May 8 00:37:56.451131 containerd[1637]: time="2025-05-08T00:37:56.450905966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:56.451131 containerd[1637]: time="2025-05-08T00:37:56.450971746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:56.451131 containerd[1637]: time="2025-05-08T00:37:56.450988531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:56.451131 containerd[1637]: time="2025-05-08T00:37:56.451056746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:56.477660 systemd-resolved[1546]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:37:56.527413 containerd[1637]: time="2025-05-08T00:37:56.527378533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd5bc9757-2fqjb,Uid:cc3cbce3-e010-43f8-9c0d-b620a698e076,Namespace:calico-system,Attempt:0,} returns sandbox id \"955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e\"" May 8 00:37:56.536405 containerd[1637]: time="2025-05-08T00:37:56.536337198Z" level=info msg="CreateContainer within sandbox \"955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 00:37:56.734430 containerd[1637]: time="2025-05-08T00:37:56.733338761Z" level=info msg="CreateContainer within sandbox \"955f4e0a869ad26e2274f66e085b38330d2c17cfe219ac83caa212731c0b0d1e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a20805e5d6fdeb2d0976b8920ae8fe61e627a60c37f7d27f878c3a3db5bb6ae\"" May 8 00:37:56.735055 containerd[1637]: time="2025-05-08T00:37:56.735021027Z" level=info msg="StartContainer for \"4a20805e5d6fdeb2d0976b8920ae8fe61e627a60c37f7d27f878c3a3db5bb6ae\"" May 8 00:37:56.918497 containerd[1637]: time="2025-05-08T00:37:56.918465372Z" level=info msg="StartContainer for \"4a20805e5d6fdeb2d0976b8920ae8fe61e627a60c37f7d27f878c3a3db5bb6ae\" returns successfully" May 8 00:37:57.937574 systemd[1]: run-containerd-runc-k8s.io-4a20805e5d6fdeb2d0976b8920ae8fe61e627a60c37f7d27f878c3a3db5bb6ae-runc.W4B60J.mount: Deactivated successfully. May 8 00:37:58.018394 systemd-networkd[1293]: cali3d8d36acb5c: Gained IPv6LL May 8 00:37:58.082093 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:37:58.081386 systemd-resolved[1546]: Under memory pressure, flushing caches. 
May 8 00:37:58.081392 systemd-resolved[1546]: Flushed all caches. May 8 00:37:58.634551 systemd[1]: Started sshd@9-139.178.70.106:22-139.178.68.195:53960.service - OpenSSH per-connection server daemon (139.178.68.195:53960). May 8 00:37:59.509368 sshd[7118]: Accepted publickey for core from 139.178.68.195 port 53960 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:37:59.530735 sshd[7118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:37:59.550902 systemd-logind[1616]: New session 11 of user core. May 8 00:37:59.554592 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:38:00.127670 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:38:00.130129 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:38:00.127676 systemd-resolved[1546]: Flushed all caches. May 8 00:38:01.820395 sshd[7118]: pam_unix(sshd:session): session closed for user core May 8 00:38:01.827118 systemd[1]: sshd@9-139.178.70.106:22-139.178.68.195:53960.service: Deactivated successfully. May 8 00:38:01.828791 systemd-logind[1616]: Session 11 logged out. Waiting for processes to exit. May 8 00:38:01.831859 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:38:01.832579 systemd-logind[1616]: Removed session 11. May 8 00:38:02.175477 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:38:02.175483 systemd-resolved[1546]: Flushed all caches. May 8 00:38:02.177409 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:38:05.951624 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:38:05.952483 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:38:05.951630 systemd-resolved[1546]: Flushed all caches. May 8 00:38:06.826551 systemd[1]: Started sshd@10-139.178.70.106:22-139.178.68.195:42040.service - OpenSSH per-connection server daemon (139.178.68.195:42040). 
May 8 00:38:06.901411 sshd[7186]: Accepted publickey for core from 139.178.68.195 port 42040 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:06.903664 sshd[7186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:06.911140 systemd-logind[1616]: New session 12 of user core. May 8 00:38:06.915546 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:38:07.488565 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.68.195:42052.service - OpenSSH per-connection server daemon (139.178.68.195:42052). May 8 00:38:07.493223 sshd[7186]: pam_unix(sshd:session): session closed for user core May 8 00:38:07.528251 systemd[1]: sshd@10-139.178.70.106:22-139.178.68.195:42040.service: Deactivated successfully. May 8 00:38:07.531955 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:38:07.535058 systemd-logind[1616]: Session 12 logged out. Waiting for processes to exit. May 8 00:38:07.537043 systemd-logind[1616]: Removed session 12. May 8 00:38:07.553428 sshd[7198]: Accepted publickey for core from 139.178.68.195 port 42052 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:07.554558 sshd[7198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:07.558566 systemd-logind[1616]: New session 13 of user core. May 8 00:38:07.565631 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:38:07.819253 sshd[7198]: pam_unix(sshd:session): session closed for user core May 8 00:38:07.825256 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.68.195:42068.service - OpenSSH per-connection server daemon (139.178.68.195:42068). May 8 00:38:07.825568 systemd[1]: sshd@11-139.178.70.106:22-139.178.68.195:42052.service: Deactivated successfully. May 8 00:38:07.830595 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:38:07.833486 systemd-logind[1616]: Session 13 logged out. Waiting for processes to exit. 
May 8 00:38:07.835075 systemd-logind[1616]: Removed session 13. May 8 00:38:07.883184 sshd[7210]: Accepted publickey for core from 139.178.68.195 port 42068 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:07.885520 sshd[7210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:07.889916 systemd-logind[1616]: New session 14 of user core. May 8 00:38:07.893553 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:38:07.999514 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:38:08.000507 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:38:07.999519 systemd-resolved[1546]: Flushed all caches. May 8 00:38:08.297470 sshd[7210]: pam_unix(sshd:session): session closed for user core May 8 00:38:08.307801 systemd[1]: sshd@12-139.178.70.106:22-139.178.68.195:42068.service: Deactivated successfully. May 8 00:38:08.309735 systemd-logind[1616]: Session 14 logged out. Waiting for processes to exit. May 8 00:38:08.309888 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:38:08.310749 systemd-logind[1616]: Removed session 14. May 8 00:38:13.308764 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.68.195:42072.service - OpenSSH per-connection server daemon (139.178.68.195:42072). May 8 00:38:13.382936 sshd[7237]: Accepted publickey for core from 139.178.68.195 port 42072 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:13.383855 sshd[7237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:13.387438 systemd-logind[1616]: New session 15 of user core. May 8 00:38:13.388563 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:38:13.529750 sshd[7237]: pam_unix(sshd:session): session closed for user core May 8 00:38:13.532210 systemd-logind[1616]: Session 15 logged out. Waiting for processes to exit. 
May 8 00:38:13.532399 systemd[1]: sshd@13-139.178.70.106:22-139.178.68.195:42072.service: Deactivated successfully. May 8 00:38:13.534514 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:38:13.536033 systemd-logind[1616]: Removed session 15. May 8 00:38:18.536493 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.68.195:53866.service - OpenSSH per-connection server daemon (139.178.68.195:53866). May 8 00:38:18.560271 sshd[7252]: Accepted publickey for core from 139.178.68.195 port 53866 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:18.561131 sshd[7252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:18.564134 systemd-logind[1616]: New session 16 of user core. May 8 00:38:18.570870 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:38:18.724263 sshd[7252]: pam_unix(sshd:session): session closed for user core May 8 00:38:18.726265 systemd[1]: sshd@14-139.178.70.106:22-139.178.68.195:53866.service: Deactivated successfully. May 8 00:38:18.728496 systemd-logind[1616]: Session 16 logged out. Waiting for processes to exit. May 8 00:38:18.728801 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:38:18.729890 systemd-logind[1616]: Removed session 16. 
May 8 00:38:20.474200 kubelet[2965]: I0508 00:38:20.461912 2965 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fd5bc9757-2fqjb" podStartSLOduration=30.431264777 podStartE2EDuration="30.431264777s" podCreationTimestamp="2025-05-08 00:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:57.862627728 +0000 UTC m=+73.795936357" watchObservedRunningTime="2025-05-08 00:38:20.431264777 +0000 UTC m=+96.364573405" May 8 00:38:23.734659 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.68.195:53880.service - OpenSSH per-connection server daemon (139.178.68.195:53880). May 8 00:38:23.976895 sshd[7299]: Accepted publickey for core from 139.178.68.195 port 53880 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:23.977812 sshd[7299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:23.981391 systemd-logind[1616]: New session 17 of user core. May 8 00:38:23.985451 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:38:24.343491 sshd[7299]: pam_unix(sshd:session): session closed for user core May 8 00:38:24.345765 systemd[1]: sshd@15-139.178.70.106:22-139.178.68.195:53880.service: Deactivated successfully. May 8 00:38:24.348195 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:38:24.348322 systemd-logind[1616]: Session 17 logged out. Waiting for processes to exit. May 8 00:38:24.349489 systemd-logind[1616]: Removed session 17. May 8 00:38:29.349614 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.68.195:54646.service - OpenSSH per-connection server daemon (139.178.68.195:54646). 
May 8 00:38:29.401382 sshd[7333]: Accepted publickey for core from 139.178.68.195 port 54646 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:29.402279 sshd[7333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:29.404919 systemd-logind[1616]: New session 18 of user core. May 8 00:38:29.411592 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:38:29.676484 sshd[7333]: pam_unix(sshd:session): session closed for user core May 8 00:38:29.681586 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.68.195:54662.service - OpenSSH per-connection server daemon (139.178.68.195:54662). May 8 00:38:29.683201 systemd-logind[1616]: Session 18 logged out. Waiting for processes to exit. May 8 00:38:29.683637 systemd[1]: sshd@16-139.178.70.106:22-139.178.68.195:54646.service: Deactivated successfully. May 8 00:38:29.685371 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:38:29.687061 systemd-logind[1616]: Removed session 18. May 8 00:38:29.755746 sshd[7344]: Accepted publickey for core from 139.178.68.195 port 54662 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:29.756801 sshd[7344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:29.759816 systemd-logind[1616]: New session 19 of user core. May 8 00:38:29.765486 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:38:30.200061 sshd[7344]: pam_unix(sshd:session): session closed for user core May 8 00:38:30.209531 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.68.195:54676.service - OpenSSH per-connection server daemon (139.178.68.195:54676). May 8 00:38:30.209961 systemd[1]: sshd@17-139.178.70.106:22-139.178.68.195:54662.service: Deactivated successfully. May 8 00:38:30.211614 systemd-logind[1616]: Session 19 logged out. Waiting for processes to exit. May 8 00:38:30.214089 systemd[1]: session-19.scope: Deactivated successfully. 
May 8 00:38:30.215245 systemd-logind[1616]: Removed session 19. May 8 00:38:30.248002 sshd[7356]: Accepted publickey for core from 139.178.68.195 port 54676 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:30.249263 sshd[7356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:30.252983 systemd-logind[1616]: New session 20 of user core. May 8 00:38:30.257403 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:38:32.025306 sshd[7356]: pam_unix(sshd:session): session closed for user core May 8 00:38:32.027551 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.68.195:54688.service - OpenSSH per-connection server daemon (139.178.68.195:54688). May 8 00:38:32.038682 systemd[1]: sshd@18-139.178.70.106:22-139.178.68.195:54676.service: Deactivated successfully. May 8 00:38:32.040222 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:38:32.041409 systemd-logind[1616]: Session 20 logged out. Waiting for processes to exit. May 8 00:38:32.043508 systemd-logind[1616]: Removed session 20. May 8 00:38:32.155104 sshd[7371]: Accepted publickey for core from 139.178.68.195 port 54688 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:32.156206 sshd[7371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:32.158923 systemd-logind[1616]: New session 21 of user core. May 8 00:38:32.163480 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:38:33.019503 sshd[7371]: pam_unix(sshd:session): session closed for user core May 8 00:38:33.025573 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.68.195:54696.service - OpenSSH per-connection server daemon (139.178.68.195:54696). May 8 00:38:33.026038 systemd[1]: sshd@19-139.178.70.106:22-139.178.68.195:54688.service: Deactivated successfully. May 8 00:38:33.028088 systemd[1]: session-21.scope: Deactivated successfully. 
May 8 00:38:33.028115 systemd-logind[1616]: Session 21 logged out. Waiting for processes to exit. May 8 00:38:33.029057 systemd-logind[1616]: Removed session 21. May 8 00:38:33.095475 sshd[7390]: Accepted publickey for core from 139.178.68.195 port 54696 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:33.096313 sshd[7390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:33.099003 systemd-logind[1616]: New session 22 of user core. May 8 00:38:33.102484 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:38:33.211303 sshd[7390]: pam_unix(sshd:session): session closed for user core May 8 00:38:33.213200 systemd[1]: sshd@20-139.178.70.106:22-139.178.68.195:54696.service: Deactivated successfully. May 8 00:38:33.216535 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:38:33.217398 systemd-logind[1616]: Session 22 logged out. Waiting for processes to exit. May 8 00:38:33.218051 systemd-logind[1616]: Removed session 22. May 8 00:38:33.983636 systemd-resolved[1546]: Under memory pressure, flushing caches. May 8 00:38:34.001069 systemd-journald[1182]: Under memory pressure, flushing caches. May 8 00:38:33.983640 systemd-resolved[1546]: Flushed all caches. May 8 00:38:38.218571 systemd[1]: Started sshd@21-139.178.70.106:22-139.178.68.195:54020.service - OpenSSH per-connection server daemon (139.178.68.195:54020). May 8 00:38:38.244867 sshd[7411]: Accepted publickey for core from 139.178.68.195 port 54020 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:38.245721 sshd[7411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:38.248378 systemd-logind[1616]: New session 23 of user core. May 8 00:38:38.251543 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 8 00:38:38.363328 sshd[7411]: pam_unix(sshd:session): session closed for user core May 8 00:38:38.367098 systemd[1]: sshd@21-139.178.70.106:22-139.178.68.195:54020.service: Deactivated successfully. May 8 00:38:38.368119 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:38:38.369379 systemd-logind[1616]: Session 23 logged out. Waiting for processes to exit. May 8 00:38:38.370652 systemd-logind[1616]: Removed session 23. May 8 00:38:43.372512 systemd[1]: Started sshd@22-139.178.70.106:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). May 8 00:38:43.404213 sshd[7433]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg May 8 00:38:43.405362 sshd[7433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:43.408288 systemd-logind[1616]: New session 24 of user core. May 8 00:38:43.414477 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:38:43.633023 sshd[7433]: pam_unix(sshd:session): session closed for user core May 8 00:38:43.635802 systemd[1]: sshd@22-139.178.70.106:22-139.178.68.195:54034.service: Deactivated successfully. May 8 00:38:43.638474 systemd-logind[1616]: Session 24 logged out. Waiting for processes to exit. May 8 00:38:43.638514 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:38:43.639628 systemd-logind[1616]: Removed session 24. 
May 8 00:38:45.722531 containerd[1637]: time="2025-05-08T00:38:45.706147063Z" level=info msg="StopPodSandbox for \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\"" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.038 [WARNING][7462] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.040 [INFO][7462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.040 [INFO][7462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" iface="eth0" netns="" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.040 [INFO][7462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.040 [INFO][7462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.238 [INFO][7469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0" May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.240 [INFO][7469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.240 [INFO][7469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.249 [WARNING][7469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.249 [INFO][7469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.250 [INFO][7469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:46.252632 containerd[1637]: 2025-05-08 00:38:46.251 [INFO][7462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768"
May 8 00:38:46.257630 containerd[1637]: time="2025-05-08T00:38:46.257600460Z" level=info msg="TearDown network for sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" successfully"
May 8 00:38:46.257661 containerd[1637]: time="2025-05-08T00:38:46.257631602Z" level=info msg="StopPodSandbox for \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" returns successfully"
May 8 00:38:46.271915 containerd[1637]: time="2025-05-08T00:38:46.271842789Z" level=info msg="RemovePodSandbox for \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\""
May 8 00:38:46.278438 containerd[1637]: time="2025-05-08T00:38:46.278075033Z" level=info msg="Forcibly stopping sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\""
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.569 [WARNING][7488] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.569 [INFO][7488] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.569 [INFO][7488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" iface="eth0" netns=""
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.569 [INFO][7488] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.569 [INFO][7488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.610 [INFO][7495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.611 [INFO][7495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.611 [INFO][7495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.614 [WARNING][7495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.614 [INFO][7495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" HandleID="k8s-pod-network.edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768" Workload="localhost-k8s-calico--apiserver--84669494cd--c6tg9-eth0"
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.615 [INFO][7495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:46.618918 containerd[1637]: 2025-05-08 00:38:46.617 [INFO][7488] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768"
May 8 00:38:46.624527 containerd[1637]: time="2025-05-08T00:38:46.619305678Z" level=info msg="TearDown network for sandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" successfully"
May 8 00:38:46.791424 containerd[1637]: time="2025-05-08T00:38:46.791126748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:38:46.791424 containerd[1637]: time="2025-05-08T00:38:46.791200713Z" level=info msg="RemovePodSandbox \"edcfe40c015f0a7859aa760f892672195034bf9a30e6ea648f2bc5b902d78768\" returns successfully"
May 8 00:38:46.799371 containerd[1637]: time="2025-05-08T00:38:46.799350500Z" level=info msg="StopPodSandbox for \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\""
May 8 00:38:46.799453 containerd[1637]: time="2025-05-08T00:38:46.799415635Z" level=info msg="TearDown network for sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" successfully"
May 8 00:38:46.799453 containerd[1637]: time="2025-05-08T00:38:46.799428101Z" level=info msg="StopPodSandbox for \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" returns successfully"
May 8 00:38:46.799604 containerd[1637]: time="2025-05-08T00:38:46.799594226Z" level=info msg="RemovePodSandbox for \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\""
May 8 00:38:46.804925 containerd[1637]: time="2025-05-08T00:38:46.799607753Z" level=info msg="Forcibly stopping sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\""
May 8 00:38:46.804925 containerd[1637]: time="2025-05-08T00:38:46.799634877Z" level=info msg="TearDown network for sandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" successfully"
May 8 00:38:47.241828 containerd[1637]: time="2025-05-08T00:38:47.241700138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:38:47.241828 containerd[1637]: time="2025-05-08T00:38:47.241754298Z" level=info msg="RemovePodSandbox \"eb2d614a0f671a46ede64a02eccc94498a3b0c4be905254c10447f476a4aef48\" returns successfully"
May 8 00:38:47.242539 containerd[1637]: time="2025-05-08T00:38:47.242235027Z" level=info msg="StopPodSandbox for \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\""
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.336 [WARNING][7513] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.336 [INFO][7513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.336 [INFO][7513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" iface="eth0" netns=""
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.336 [INFO][7513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.337 [INFO][7513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.352 [INFO][7520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.352 [INFO][7520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.352 [INFO][7520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.356 [WARNING][7520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.356 [INFO][7520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.357 [INFO][7520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:47.359573 containerd[1637]: 2025-05-08 00:38:47.358 [INFO][7513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.361013 containerd[1637]: time="2025-05-08T00:38:47.359596693Z" level=info msg="TearDown network for sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" successfully"
May 8 00:38:47.361013 containerd[1637]: time="2025-05-08T00:38:47.359615665Z" level=info msg="StopPodSandbox for \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" returns successfully"
May 8 00:38:47.361013 containerd[1637]: time="2025-05-08T00:38:47.360760150Z" level=info msg="RemovePodSandbox for \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\""
May 8 00:38:47.361013 containerd[1637]: time="2025-05-08T00:38:47.360778001Z" level=info msg="Forcibly stopping sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\""
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.384 [WARNING][7539] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" WorkloadEndpoint="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.384 [INFO][7539] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.384 [INFO][7539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" iface="eth0" netns=""
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.384 [INFO][7539] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.384 [INFO][7539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.401 [INFO][7546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.401 [INFO][7546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.401 [INFO][7546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.406 [WARNING][7546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.406 [INFO][7546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" HandleID="k8s-pod-network.274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8" Workload="localhost-k8s-calico--apiserver--84669494cd--gzmz2-eth0"
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.407 [INFO][7546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:47.409495 containerd[1637]: 2025-05-08 00:38:47.408 [INFO][7539] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8"
May 8 00:38:47.410572 containerd[1637]: time="2025-05-08T00:38:47.409525123Z" level=info msg="TearDown network for sandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" successfully"
May 8 00:38:47.435055 containerd[1637]: time="2025-05-08T00:38:47.435024607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:38:47.435303 containerd[1637]: time="2025-05-08T00:38:47.435072634Z" level=info msg="RemovePodSandbox \"274db3b8176f7e43b0e3db3940d31fbbbe687423c83a8cb33e8e7ef5ef44bdb8\" returns successfully"
May 8 00:38:47.435415 containerd[1637]: time="2025-05-08T00:38:47.435368499Z" level=info msg="StopPodSandbox for \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\""
May 8 00:38:47.435415 containerd[1637]: time="2025-05-08T00:38:47.435411187Z" level=info msg="TearDown network for sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" successfully"
May 8 00:38:47.435494 containerd[1637]: time="2025-05-08T00:38:47.435417452Z" level=info msg="StopPodSandbox for \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" returns successfully"
May 8 00:38:47.436053 containerd[1637]: time="2025-05-08T00:38:47.435633563Z" level=info msg="RemovePodSandbox for \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\""
May 8 00:38:47.436053 containerd[1637]: time="2025-05-08T00:38:47.435649934Z" level=info msg="Forcibly stopping sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\""
May 8 00:38:47.436053 containerd[1637]: time="2025-05-08T00:38:47.435679244Z" level=info msg="TearDown network for sandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" successfully"
May 8 00:38:47.442536 containerd[1637]: time="2025-05-08T00:38:47.442516377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:38:47.442638 containerd[1637]: time="2025-05-08T00:38:47.442627753Z" level=info msg="RemovePodSandbox \"1e48c04261d2a4a0219eb4a6f6095118446d72554edd4a7c1275a8dfaa8904ba\" returns successfully"
May 8 00:38:47.442917 containerd[1637]: time="2025-05-08T00:38:47.442885456Z" level=info msg="StopPodSandbox for \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\""
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.465 [WARNING][7565] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.465 [INFO][7565] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.465 [INFO][7565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" iface="eth0" netns=""
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.465 [INFO][7565] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.465 [INFO][7565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.479 [INFO][7572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.479 [INFO][7572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.479 [INFO][7572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.483 [WARNING][7572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.483 [INFO][7572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.484 [INFO][7572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:47.486401 containerd[1637]: 2025-05-08 00:38:47.485 [INFO][7565] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.486833 containerd[1637]: time="2025-05-08T00:38:47.486426760Z" level=info msg="TearDown network for sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" successfully"
May 8 00:38:47.486833 containerd[1637]: time="2025-05-08T00:38:47.486454661Z" level=info msg="StopPodSandbox for \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" returns successfully"
May 8 00:38:47.487239 containerd[1637]: time="2025-05-08T00:38:47.486992835Z" level=info msg="RemovePodSandbox for \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\""
May 8 00:38:47.487239 containerd[1637]: time="2025-05-08T00:38:47.487010604Z" level=info msg="Forcibly stopping sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\""
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.511 [WARNING][7590] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.511 [INFO][7590] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.511 [INFO][7590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" iface="eth0" netns=""
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.511 [INFO][7590] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.511 [INFO][7590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.526 [INFO][7597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.526 [INFO][7597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.526 [INFO][7597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.530 [WARNING][7597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.530 [INFO][7597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" HandleID="k8s-pod-network.b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e" Workload="localhost-k8s-calico--kube--controllers--67ddb48bf6--z6mbt-eth0"
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.530 [INFO][7597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:38:47.533229 containerd[1637]: 2025-05-08 00:38:47.531 [INFO][7590] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e"
May 8 00:38:47.533229 containerd[1637]: time="2025-05-08T00:38:47.533199025Z" level=info msg="TearDown network for sandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" successfully"
May 8 00:38:47.535695 containerd[1637]: time="2025-05-08T00:38:47.535681787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:38:47.535814 containerd[1637]: time="2025-05-08T00:38:47.535751406Z" level=info msg="RemovePodSandbox \"b2d9f88c3ef0fa40800e8de031ca1db06c427e8d5062b11021d466cf61267c0e\" returns successfully"
May 8 00:38:48.063463 systemd-resolved[1546]: Under memory pressure, flushing caches.
May 8 00:38:48.063474 systemd-resolved[1546]: Flushed all caches.
May 8 00:38:48.065364 systemd-journald[1182]: Under memory pressure, flushing caches.
May 8 00:38:48.641015 systemd[1]: Started sshd@23-139.178.70.106:22-139.178.68.195:59356.service - OpenSSH per-connection server daemon (139.178.68.195:59356).
May 8 00:38:48.744583 sshd[7603]: Accepted publickey for core from 139.178.68.195 port 59356 ssh2: RSA SHA256:K6koWqi65G0NEZIdyqBHM11YGd87HXVeKfxzt5n0Rpg
May 8 00:38:48.746629 sshd[7603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:38:48.749777 systemd-logind[1616]: New session 25 of user core.
May 8 00:38:48.753485 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:38:49.719948 sshd[7603]: pam_unix(sshd:session): session closed for user core
May 8 00:38:49.723261 systemd[1]: sshd@23-139.178.70.106:22-139.178.68.195:59356.service: Deactivated successfully.
May 8 00:38:49.727193 systemd-logind[1616]: Session 25 logged out. Waiting for processes to exit.
May 8 00:38:49.727256 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:38:49.729117 systemd-logind[1616]: Removed session 25.
May 8 00:38:50.112389 systemd-journald[1182]: Under memory pressure, flushing caches.
May 8 00:38:50.111608 systemd-resolved[1546]: Under memory pressure, flushing caches.
May 8 00:38:50.111614 systemd-resolved[1546]: Flushed all caches.