May 13 00:25:49.756080 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025 May 13 00:25:49.756098 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.756104 kernel: Disabled fast string operations May 13 00:25:49.756108 kernel: BIOS-provided physical RAM map: May 13 00:25:49.756112 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 13 00:25:49.756116 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 13 00:25:49.756122 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 13 00:25:49.756126 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 13 00:25:49.756130 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 13 00:25:49.756135 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 13 00:25:49.756139 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 13 00:25:49.756143 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 13 00:25:49.756147 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 13 00:25:49.756151 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 13 00:25:49.756162 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 13 00:25:49.756167 kernel: NX (Execute Disable) protection: active May 13 00:25:49.756172 kernel: APIC: Static calls initialized May 13 00:25:49.756180 kernel: SMBIOS 2.7 present. May 13 00:25:49.756185 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 13 00:25:49.756190 kernel: vmware: hypercall mode: 0x00 May 13 00:25:49.756194 kernel: Hypervisor detected: VMware May 13 00:25:49.756199 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 13 00:25:49.756205 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 13 00:25:49.756210 kernel: vmware: using clock offset of 3586072690 ns May 13 00:25:49.757716 kernel: tsc: Detected 3408.000 MHz processor May 13 00:25:49.757725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:25:49.757730 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:25:49.757735 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 13 00:25:49.757740 kernel: total RAM covered: 3072M May 13 00:25:49.757745 kernel: Found optimal setting for mtrr clean up May 13 00:25:49.757750 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 13 00:25:49.757758 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 13 00:25:49.757763 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:25:49.757768 kernel: Using GB pages for direct mapping May 13 00:25:49.757772 kernel: ACPI: Early table checksum verification disabled May 13 00:25:49.757777 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 13 00:25:49.757782 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 13 00:25:49.757787 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 13 00:25:49.757792 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 13 00:25:49.757797 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:25:49.757805 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:25:49.757810 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 13 00:25:49.757815 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 13 00:25:49.757820 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 13 00:25:49.757825 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 13 00:25:49.757832 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 13 00:25:49.757837 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 13 00:25:49.757842 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 13 00:25:49.757847 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 13 00:25:49.757852 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:25:49.757857 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:25:49.757862 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 13 00:25:49.757867 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 13 00:25:49.757872 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 13 00:25:49.757877 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 13 00:25:49.757883 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 13 00:25:49.757888 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 13 00:25:49.757894 kernel: system APIC only can use physical flat May 13 00:25:49.757899 kernel: APIC: Switched APIC routing to: physical flat May 13 00:25:49.757904 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 00:25:49.757909 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 13 00:25:49.757914 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 13 00:25:49.757919 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 13 00:25:49.757924 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 13 00:25:49.757930 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 13 00:25:49.757935 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 13 00:25:49.757940 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 13 00:25:49.757945 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 13 00:25:49.757950 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 13 00:25:49.757955 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 13 00:25:49.757960 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 13 00:25:49.757965 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 13 00:25:49.757970 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 13 00:25:49.757975 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 13 00:25:49.757981 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 13 00:25:49.757986 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 13 00:25:49.757991 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 13 00:25:49.757996 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 13 00:25:49.758001 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 13 00:25:49.758006 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 13 00:25:49.758011 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 13 00:25:49.758016 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 13 00:25:49.758021 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 13 00:25:49.758026 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 13 00:25:49.758032 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 13 00:25:49.758037 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 13 00:25:49.758042 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 13 00:25:49.758047 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 13 00:25:49.758052 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 May 13 00:25:49.758057 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 13 00:25:49.758062 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 13 00:25:49.758067 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 13 00:25:49.758072 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 13 00:25:49.758077 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 13 00:25:49.758081 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 13 00:25:49.758088 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 13 00:25:49.758092 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 13 00:25:49.758098 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 13 00:25:49.758102 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 13 00:25:49.758107 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 13 00:25:49.758112 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 13 00:25:49.758117 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 13 00:25:49.758122 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 13 00:25:49.758127 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 13 00:25:49.758132 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 13 00:25:49.758138 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 13 00:25:49.758143 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 13 00:25:49.758148 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 13 00:25:49.758153 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 13 00:25:49.758158 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 13 00:25:49.758163 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 13 00:25:49.758168 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 13 00:25:49.758173 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 13 00:25:49.758178 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 13 00:25:49.758182 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 13 00:25:49.758188 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 13 00:25:49.758202 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 13 00:25:49.758208 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 13 00:25:49.758232 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 13 00:25:49.758240 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 13 00:25:49.758245 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 13 00:25:49.758251 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 13 00:25:49.758256 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 13 00:25:49.758261 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 13 00:25:49.758271 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 13 00:25:49.758276 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 13 00:25:49.758282 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 13 00:25:49.758287 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 13 00:25:49.758292 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 13 00:25:49.758300 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 13 00:25:49.758305 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 13 00:25:49.758310 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 13 00:25:49.758316 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 13 00:25:49.758321 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 13 00:25:49.758328 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 13 00:25:49.758333 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 13 00:25:49.758338 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 13 00:25:49.758344 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 13 00:25:49.758349 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 13 00:25:49.758354 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 13 00:25:49.758359 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 13 00:25:49.758365 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 13 00:25:49.758370 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 May 13 00:25:49.758375 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 13 00:25:49.758382 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 13 00:25:49.758387 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 13 00:25:49.758392 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 13 00:25:49.758397 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 13 00:25:49.758403 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 13 00:25:49.758408 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 13 00:25:49.758413 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 13 00:25:49.758418 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 13 00:25:49.758424 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 13 00:25:49.758429 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 13 00:25:49.758435 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 13 00:25:49.758441 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 13 00:25:49.758446 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 13 00:25:49.758452 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 13 00:25:49.758457 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 13 00:25:49.758462 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 13 00:25:49.758467 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 13 00:25:49.758473 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 13 00:25:49.758478 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 13 00:25:49.758483 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 13 00:25:49.758490 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 13 00:25:49.758495 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 13 00:25:49.758500 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 13 00:25:49.758505 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 13 00:25:49.758511 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 13 00:25:49.758516 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 13 00:25:49.758521 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 13 00:25:49.758527 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 13 00:25:49.758532 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 13 00:25:49.758537 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 13 00:25:49.758542 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 13 00:25:49.758549 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 13 00:25:49.758554 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 13 00:25:49.758560 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 13 00:25:49.758565 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 13 00:25:49.758570 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 13 00:25:49.758575 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 13 00:25:49.758581 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 13 00:25:49.758586 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 13 00:25:49.758591 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 13 00:25:49.758598 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 13 00:25:49.758609 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 13 00:25:49.758615 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 13 00:25:49.758620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 00:25:49.758626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 00:25:49.758632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 13 00:25:49.758639 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 13 00:25:49.758645 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 13 00:25:49.758650 kernel: Zone ranges: May 13 00:25:49.758656 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:25:49.758663 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 13 00:25:49.758668 kernel: Normal empty May 13 00:25:49.758674 kernel: Movable zone start for each node May 13 00:25:49.758679 kernel: Early memory node ranges May 13 00:25:49.758685 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 13 00:25:49.758690 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 13 00:25:49.758695 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 13 00:25:49.758701 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 13 00:25:49.758706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:25:49.758712 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 13 00:25:49.758719 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 13 00:25:49.758725 kernel: ACPI: PM-Timer IO Port: 0x1008 May 13 00:25:49.758730 kernel: system APIC only can use physical flat May 13 00:25:49.758735 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 13 00:25:49.758741 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 13 00:25:49.758747 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 13 00:25:49.758752 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 13 00:25:49.758757 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 13 00:25:49.758763 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 13 00:25:49.758769 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 13 00:25:49.758775 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 13 00:25:49.758780 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 13 00:25:49.758785 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 13 00:25:49.758791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 13 00:25:49.758796 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 13 00:25:49.758801 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 13 00:25:49.758807 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 13 00:25:49.758812 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 13 00:25:49.758817 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 13 00:25:49.758827 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 13 00:25:49.758833 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 13 00:25:49.758842 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 13 00:25:49.758851 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 13 00:25:49.758860 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 13 00:25:49.758866 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 13 00:25:49.758871 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 13 00:25:49.758876 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 13 00:25:49.758882 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 13 00:25:49.758887 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 13 00:25:49.758894 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 13 00:25:49.758900 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 13 00:25:49.758905 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 13 00:25:49.758913 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 13 00:25:49.758922 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 13 00:25:49.758932 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 13 00:25:49.758940 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 13 00:25:49.758946 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 13 00:25:49.758951 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 13 00:25:49.758959 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 13 00:25:49.758964 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 13 00:25:49.758969 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 13 00:25:49.758974 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 13 00:25:49.758980 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 13 00:25:49.758985 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 13 00:25:49.758991 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 13 00:25:49.758996 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 13 00:25:49.759002 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 13 00:25:49.759007 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 13 00:25:49.759014 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 13 00:25:49.759019 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 13 00:25:49.759024 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 13 00:25:49.759030 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 13 00:25:49.759035 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 13 00:25:49.759040 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 13 00:25:49.759046 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 13 00:25:49.759051 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 13 00:25:49.759056 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 13 00:25:49.759063 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 13 00:25:49.759068 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 13 00:25:49.759073 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 13 00:25:49.759079 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 13 00:25:49.759084 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 13 00:25:49.759090 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 13 00:25:49.759095 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 13 00:25:49.759100 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 13 00:25:49.759106 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 13 00:25:49.759111 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 13 00:25:49.759117 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 13 00:25:49.759123 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 13 00:25:49.759128 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 13 00:25:49.759133 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 13 00:25:49.759139 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 13 00:25:49.759144 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 13 00:25:49.759149 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 13 00:25:49.759155 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 13 00:25:49.759160 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 13 00:25:49.759166 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 13 00:25:49.759172 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 13 00:25:49.759177 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 13 00:25:49.759183 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 13 00:25:49.759188 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 13 00:25:49.759193 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 13 00:25:49.759199 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 13 00:25:49.759204 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 13 00:25:49.759210 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 13 00:25:49.761264 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 13 00:25:49.761276 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 13 00:25:49.761282 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 13 00:25:49.761288 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 13 00:25:49.761293 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 13 00:25:49.761299 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 13 00:25:49.761307 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 13 00:25:49.761314 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 13 00:25:49.761323 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 13 00:25:49.761330 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 13 00:25:49.761335 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 13 00:25:49.761343 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 13 00:25:49.761348 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 13 00:25:49.761354 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 13 00:25:49.761359 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 13 00:25:49.761366 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 13 00:25:49.761373 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 13 00:25:49.761378 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 13 00:25:49.761387 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 13 00:25:49.761395 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 13 00:25:49.761402 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 13 00:25:49.761409 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 13 00:25:49.761416 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 13 00:25:49.761421 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 13 00:25:49.761426 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 13 00:25:49.761432 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 13 00:25:49.761438 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 13 00:25:49.761446 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 13 00:25:49.761462 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 13 00:25:49.761469 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 13 00:25:49.761477 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 13 00:25:49.761482 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 13 00:25:49.761488 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 13 00:25:49.761493 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 13 00:25:49.761498 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 13 
00:25:49.761503 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 13 00:25:49.761509 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 13 00:25:49.761514 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 13 00:25:49.761521 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 13 00:25:49.761528 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 13 00:25:49.761537 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 13 00:25:49.761545 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 13 00:25:49.761551 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 13 00:25:49.761556 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 13 00:25:49.761562 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 13 00:25:49.761567 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 13 00:25:49.761576 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 13 00:25:49.761585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 13 00:25:49.761595 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:25:49.761605 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 13 00:25:49.761610 kernel: TSC deadline timer available May 13 00:25:49.761619 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 13 00:25:49.761624 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 13 00:25:49.761631 kernel: Booting paravirtualized kernel on VMware hypervisor May 13 00:25:49.761641 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:25:49.761650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 13 00:25:49.761659 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 13 00:25:49.761669 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 13 00:25:49.761679 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 13 00:25:49.761685 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 13 00:25:49.761690 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 13 00:25:49.761695 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 13 00:25:49.761701 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 13 00:25:49.761717 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 13 00:25:49.761728 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 13 00:25:49.761735 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 13 00:25:49.761740 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 13 00:25:49.761747 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 13 00:25:49.761753 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 13 00:25:49.761759 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 13 00:25:49.761765 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 13 00:25:49.761771 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 13 00:25:49.761779 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 13 00:25:49.761789 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 13 00:25:49.761800 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.761808 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:25:49.761814 kernel: random: crng init done May 13 00:25:49.761820 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 13 00:25:49.761828 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 13 00:25:49.761834 kernel: printk: log_buf_len min size: 262144 bytes May 13 00:25:49.761841 kernel: printk: log_buf_len: 1048576 bytes May 13 00:25:49.761848 kernel: printk: early log buf free: 239648(91%) May 13 00:25:49.761854 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:25:49.761860 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 00:25:49.761869 kernel: Fallback order for Node 0: 0 May 13 00:25:49.761876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 13 00:25:49.761882 kernel: Policy zone: DMA32 May 13 00:25:49.761888 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:25:49.761897 kernel: Memory: 1936356K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 160012K reserved, 0K cma-reserved) May 13 00:25:49.761910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 13 00:25:49.761921 kernel: ftrace: allocating 37944 entries in 149 pages May 13 00:25:49.761932 kernel: ftrace: allocated 149 pages with 4 groups May 13 00:25:49.761942 kernel: Dynamic Preempt: voluntary May 13 00:25:49.761952 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:25:49.761963 kernel: rcu: RCU event tracing is enabled. May 13 00:25:49.761971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 13 00:25:49.761977 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:25:49.761983 kernel: Rude variant of Tasks RCU enabled. May 13 00:25:49.761991 kernel: Tracing variant of Tasks RCU enabled. May 13 00:25:49.761996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:25:49.762002 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 13 00:25:49.762010 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 13 00:25:49.762019 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 13 00:25:49.762027 kernel: Console: colour VGA+ 80x25 May 13 00:25:49.762037 kernel: printk: console [tty0] enabled May 13 00:25:49.762048 kernel: printk: console [ttyS0] enabled May 13 00:25:49.762054 kernel: ACPI: Core revision 20230628 May 13 00:25:49.762062 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 13 00:25:49.762071 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:25:49.762080 kernel: x2apic enabled May 13 00:25:49.762087 kernel: APIC: Switched APIC routing to: physical x2apic May 13 00:25:49.762096 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:25:49.762105 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 00:25:49.762111 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 13 00:25:49.762117 kernel: Disabled fast string operations May 13 00:25:49.762125 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 00:25:49.762135 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 00:25:49.762148 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:25:49.762159 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 13 00:25:49.762169 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 13 00:25:49.762181 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 13 00:25:49.762191 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 13 00:25:49.762197 kernel: RETBleed: Mitigation: Enhanced IBRS May 13 00:25:49.762203 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:25:49.762209 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 00:25:49.762226 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 00:25:49.762237 kernel: SRBDS: Unknown: Dependent on hypervisor status May 13 00:25:49.762244 kernel: GDS: Unknown: Dependent on hypervisor status May 13 00:25:49.762253 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:25:49.762259 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:25:49.762265 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:25:49.762271 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:25:49.762277 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 00:25:49.762283 kernel: Freeing SMP alternatives memory: 32K May 13 00:25:49.762291 kernel: pid_max: default: 131072 minimum: 1024 May 13 00:25:49.762299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:25:49.762305 kernel: landlock: Up and running. May 13 00:25:49.762314 kernel: SELinux: Initializing. May 13 00:25:49.762322 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:25:49.762330 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:25:49.762336 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 13 00:25:49.762342 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762348 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762356 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762364 kernel: Performance Events: Skylake events, core PMU driver. May 13 00:25:49.762374 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 13 00:25:49.762384 kernel: core: CPUID marked event: 'instructions' unavailable May 13 00:25:49.762393 kernel: core: CPUID marked event: 'bus cycles' unavailable May 13 00:25:49.762403 kernel: core: CPUID marked event: 'cache references' unavailable May 13 00:25:49.762411 kernel: core: CPUID marked event: 'cache misses' unavailable May 13 00:25:49.762420 kernel: core: CPUID marked event: 'branch instructions' unavailable May 13 00:25:49.762426 kernel: core: CPUID marked event: 'branch misses' unavailable May 13 00:25:49.762434 kernel: ... 
version: 1 May 13 00:25:49.762443 kernel: ... bit width: 48 May 13 00:25:49.762450 kernel: ... generic registers: 4 May 13 00:25:49.762459 kernel: ... value mask: 0000ffffffffffff May 13 00:25:49.762465 kernel: ... max period: 000000007fffffff May 13 00:25:49.762471 kernel: ... fixed-purpose events: 0 May 13 00:25:49.762477 kernel: ... event mask: 000000000000000f May 13 00:25:49.762482 kernel: signal: max sigframe size: 1776 May 13 00:25:49.762488 kernel: rcu: Hierarchical SRCU implementation. May 13 00:25:49.762500 kernel: rcu: Max phase no-delay instances is 400. May 13 00:25:49.762510 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 00:25:49.762517 kernel: smp: Bringing up secondary CPUs ... May 13 00:25:49.762525 kernel: smpboot: x86: Booting SMP configuration: May 13 00:25:49.762532 kernel: .... node #0, CPUs: #1 May 13 00:25:49.762538 kernel: Disabled fast string operations May 13 00:25:49.762544 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 13 00:25:49.762549 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 13 00:25:49.762555 kernel: smp: Brought up 1 node, 2 CPUs May 13 00:25:49.762564 kernel: smpboot: Max logical packages: 128 May 13 00:25:49.762576 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 00:25:49.762584 kernel: devtmpfs: initialized May 13 00:25:49.762591 kernel: x86/mm: Memory block size: 128MB May 13 00:25:49.762599 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 00:25:49.762609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:25:49.762616 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 00:25:49.762624 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:25:49.762635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:25:49.762643 kernel: audit: initializing netlink subsys (disabled) May 13 00:25:49.762651 kernel: audit: type=2000 audit(1747095947.068:1): state=initialized audit_enabled=0 res=1 May 13 00:25:49.762657 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:25:49.762663 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:25:49.762668 kernel: cpuidle: using governor menu May 13 00:25:49.762677 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 00:25:49.762683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:25:49.762692 kernel: dca service started, version 1.12.1 May 13 00:25:49.762699 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 13 00:25:49.762705 kernel: PCI: Using configuration type 1 for base access May 13 00:25:49.762712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:25:49.762718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:25:49.762723 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:25:49.762729 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:25:49.762737 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:25:49.762746 kernel: ACPI: Added _OSI(Module Device) May 13 00:25:49.762756 kernel: ACPI: Added _OSI(Processor Device) May 13 00:25:49.762764 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:25:49.762771 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:25:49.762778 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:25:49.762784 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 00:25:49.762790 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 00:25:49.762796 kernel: ACPI: Interpreter enabled May 13 00:25:49.762802 kernel: ACPI: PM: (supports S0 S1 S5) May 13 00:25:49.762809 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:25:49.762819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:25:49.762827 kernel: PCI: Using E820 reservations for host bridge windows May 13 00:25:49.762835 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 00:25:49.762844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 00:25:49.762942 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:25:49.763021 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 00:25:49.763085 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 00:25:49.763095 kernel: PCI host bridge to bus 0000:00 May 13 00:25:49.763162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:25:49.765287 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.765352 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.765414 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:25:49.765478 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 00:25:49.765529 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 00:25:49.765597 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 13 00:25:49.765673 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 13 00:25:49.765751 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 13 00:25:49.765807 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 13 00:25:49.765865 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 13 00:25:49.765930 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 00:25:49.765992 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 00:25:49.766050 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 00:25:49.766109 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 00:25:49.766182 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 13 00:25:49.766252 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 00:25:49.766319 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 00:25:49.766386 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 13 00:25:49.766461 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 13 00:25:49.766520 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 13 00:25:49.766583 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 13 00:25:49.766645 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 13 00:25:49.766716 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 13 00:25:49.766779 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 13 00:25:49.766833 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 13 00:25:49.766892 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:25:49.766954 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 13 00:25:49.767033 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.767098 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.767166 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770241 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.770322 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770382 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.770448 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770512 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.770571 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770629 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.770696 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770763 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.770821 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770894 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.770978 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771036 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.771104 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771164 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.771243 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771314 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.771386 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771439 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.771510 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771585 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.771655 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771716 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.771781 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771833 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.771900 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771964 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.772025 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.772076 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.773304 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 May 13 00:25:49.773379 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.773454 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773520 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.773600 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773674 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.773748 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773801 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.773862 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773921 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.773996 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.774055 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.774111 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.774172 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.775251 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775329 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.775417 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775475 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.775541 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775612 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.775679 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775752 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.775819 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775874 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.775940 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775999 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.776077 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.776136 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.776197 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.777661 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.777729 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.777792 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.777850 kernel: pci_bus 0000:01: extended config space not accessible May 13 00:25:49.777912 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:25:49.777973 kernel: pci_bus 0000:02: extended config space not accessible May 13 00:25:49.777988 kernel: acpiphp: Slot [32] registered May 13 00:25:49.777994 kernel: acpiphp: Slot [33] registered May 13 00:25:49.778001 kernel: acpiphp: Slot [34] registered May 13 00:25:49.778006 kernel: acpiphp: Slot [35] registered May 13 00:25:49.778012 kernel: acpiphp: Slot [36] registered May 13 00:25:49.778018 kernel: acpiphp: Slot [37] registered May 13 00:25:49.778024 kernel: acpiphp: Slot [38] registered May 13 00:25:49.778030 kernel: acpiphp: Slot [39] registered May 13 00:25:49.778035 kernel: acpiphp: Slot [40] registered May 13 00:25:49.778043 kernel: acpiphp: Slot [41] registered May 13 00:25:49.778049 kernel: acpiphp: Slot [42] registered May 13 
00:25:49.778055 kernel: acpiphp: Slot [43] registered May 13 00:25:49.778063 kernel: acpiphp: Slot [44] registered May 13 00:25:49.778069 kernel: acpiphp: Slot [45] registered May 13 00:25:49.778074 kernel: acpiphp: Slot [46] registered May 13 00:25:49.778080 kernel: acpiphp: Slot [47] registered May 13 00:25:49.778089 kernel: acpiphp: Slot [48] registered May 13 00:25:49.778095 kernel: acpiphp: Slot [49] registered May 13 00:25:49.778101 kernel: acpiphp: Slot [50] registered May 13 00:25:49.778109 kernel: acpiphp: Slot [51] registered May 13 00:25:49.778119 kernel: acpiphp: Slot [52] registered May 13 00:25:49.778125 kernel: acpiphp: Slot [53] registered May 13 00:25:49.778135 kernel: acpiphp: Slot [54] registered May 13 00:25:49.778141 kernel: acpiphp: Slot [55] registered May 13 00:25:49.778147 kernel: acpiphp: Slot [56] registered May 13 00:25:49.778155 kernel: acpiphp: Slot [57] registered May 13 00:25:49.778162 kernel: acpiphp: Slot [58] registered May 13 00:25:49.778168 kernel: acpiphp: Slot [59] registered May 13 00:25:49.778176 kernel: acpiphp: Slot [60] registered May 13 00:25:49.778186 kernel: acpiphp: Slot [61] registered May 13 00:25:49.778196 kernel: acpiphp: Slot [62] registered May 13 00:25:49.778207 kernel: acpiphp: Slot [63] registered May 13 00:25:49.778460 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 00:25:49.778656 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:25:49.778717 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:25:49.781302 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.781367 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 00:25:49.781426 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 00:25:49.781491 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 00:25:49.781557 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 00:25:49.781633 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 00:25:49.781706 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 13 00:25:49.781763 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 13 00:25:49.781816 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 00:25:49.781871 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:25:49.781926 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.781994 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:25:49.782064 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 00:25:49.782145 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:25:49.782199 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.782342 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:25:49.782410 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:25:49.782514 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.782594 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.782670 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:25:49.782739 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:25:49.782821 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.782890 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.782961 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:25:49.783033 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.783090 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.783143 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:25:49.783195 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.783265 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.783356 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:25:49.783431 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.783501 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.783555 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:25:49.783606 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.783657 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.783709 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:25:49.783767 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.783850 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.783944 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 13 00:25:49.784019 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 13 00:25:49.784085 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 13 00:25:49.784156 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 13 00:25:49.784330 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 13 00:25:49.784388 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:25:49.784444 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 00:25:49.784504 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 00:25:49.784558 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:25:49.784626 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:25:49.784710 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:25:49.784791 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.784874 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:25:49.784949 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:25:49.785032 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.785102 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.785160 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:25:49.785247 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:25:49.785313 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.785396 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.785470 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:25:49.785528 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.785586 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.785646 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:25:49.785710 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.785784 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.785855 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:25:49.785912 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.785989 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.786078 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:25:49.786170 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.786379 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.786472 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:25:49.786554 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.786639 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.786720 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:25:49.786800 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:25:49.786877 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.786962 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.787044 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:25:49.787128 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:25:49.787207 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.787302 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.787384 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:25:49.787463 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:25:49.787540 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.787623 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.787688 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:25:49.787759 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.787820 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.787904 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:25:49.787974 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.788037 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.788114 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:25:49.788196 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.788345 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.788425 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:25:49.788493 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.788558 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.788636 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:25:49.788704 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.788781 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.788864 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:25:49.788924 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:25:49.788990 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.789059 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.789115 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:25:49.789191 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:25:49.789256 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:25:49.789318 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.789400 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:25:49.789456 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:25:49.789529 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.789615 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:25:49.789688 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.789764 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.789850 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:25:49.789924 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.789985 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.790067 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:25:49.790130 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.790193 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.790282 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:25:49.790338 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:25:49.790405 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.790488 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:25:49.790574 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.790653 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.790667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 00:25:49.790679 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 May 13 00:25:49.790690 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 00:25:49.790700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:25:49.790711 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 00:25:49.790717 kernel: iommu: Default domain type: Translated May 13 00:25:49.790723 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:25:49.790731 kernel: PCI: Using ACPI for IRQ routing May 13 00:25:49.790740 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:25:49.790746 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 00:25:49.790753 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 00:25:49.790826 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 00:25:49.790898 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 00:25:49.790955 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:25:49.790964 kernel: vgaarb: loaded May 13 00:25:49.790970 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 00:25:49.790979 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 00:25:49.790986 kernel: clocksource: Switched to clocksource tsc-early May 13 00:25:49.790992 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:25:49.790998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:25:49.791004 kernel: pnp: PnP ACPI init May 13 00:25:49.791077 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 00:25:49.791132 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 13 00:25:49.791207 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 00:25:49.791525 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 00:25:49.791609 kernel: pnp 00:06: [dma 2] May 13 00:25:49.791693 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 00:25:49.791768 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 00:25:49.791828 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 00:25:49.791841 kernel: pnp: PnP ACPI: found 8 devices May 13 00:25:49.791849 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:25:49.791855 kernel: NET: Registered PF_INET protocol family May 13 00:25:49.791861 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:25:49.791867 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 00:25:49.791874 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:25:49.791882 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 00:25:49.791895 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 00:25:49.791905 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 00:25:49.791915 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:25:49.791925 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:25:49.791935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:25:49.791944 kernel: NET: Registered PF_XDP protocol family May 13 00:25:49.792012 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 00:25:49.792083 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 00:25:49.792149 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 00:25:49.792240 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 00:25:49.792298 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 00:25:49.792373 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 00:25:49.792448 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 00:25:49.792530 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 00:25:49.792622 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 00:25:49.793005 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 00:25:49.793097 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 13 00:25:49.793171 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 00:25:49.793245 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 00:25:49.793315 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 00:25:49.793397 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 00:25:49.793459 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 00:25:49.793527 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 00:25:49.793597 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 00:25:49.793657 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 00:25:49.793723 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 00:25:49.793803 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 00:25:49.793881 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 00:25:49.793968 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 00:25:49.794029 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.794081 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.794133 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794201 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794392 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794477 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794541 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794592 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794644 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794696 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794775 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794851 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794919 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] May 13 00:25:49.794987 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795053 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795120 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795175 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795239 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795302 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795366 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795420 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795474 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795550 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795609 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795682 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795746 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795822 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795885 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795950 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.796006 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.796060 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.796126 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.796182 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797360 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797444 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797520 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797584 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797643 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797702 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797770 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797839 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797904 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797958 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798026 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798094 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798154 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798211 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798290 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798346 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798412 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798478 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798536 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798591 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] May 13 00:25:49.798648 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798702 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798759 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798822 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798877 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798935 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798991 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.799052 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.799129 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.799191 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.800725 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.800808 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.800876 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.800940 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801009 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801074 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801132 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801186 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801253 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801318 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801377 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801441 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801512 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801574 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801635 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801714 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801782 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801849 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801990 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.802062 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.802137 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.802197 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.802279 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:25:49.802357 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 00:25:49.802424 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:25:49.802500 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:25:49.802565 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.802642 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 13 00:25:49.802709 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 
13 00:25:49.802788 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:25:49.802855 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.802926 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.803003 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:25:49.803075 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:25:49.803143 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.803246 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.803317 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:25:49.803378 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:25:49.803436 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.803487 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.803550 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:25:49.803612 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.803663 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.803724 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:25:49.803795 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.803872 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.803946 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:25:49.804020 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.804079 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.804142 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:25:49.804767 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.804852 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.804917 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:25:49.804984 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.805038 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.805106 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 13 00:25:49.805168 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:25:49.805237 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:25:49.805297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.805351 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.805415 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:25:49.805468 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:25:49.805526 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.805598 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.805683 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:25:49.805752 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:25:49.805828 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.805909 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.805982 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:25:49.806048 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.807529 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.807589 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:25:49.807643 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.807707 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.807767 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:25:49.807820 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.807876 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.807955 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:25:49.808027 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.808093 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.808168 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:25:49.809574 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.809665 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.809740 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:25:49.809811 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:25:49.809865 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.809921 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.809986 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:25:49.810062 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:25:49.810126 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.810197 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.810282 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:25:49.810338 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:25:49.810408 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.810477 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.810544 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:25:49.810621 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.810691 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.810778 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:25:49.810845 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.810928 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.810983 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:25:49.811042 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.811108 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.811202 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:25:49.811793 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.811857 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.811930 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:25:49.812000 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.812053 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.812110 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:25:49.812182 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:25:49.812292 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.812361 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.812426 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:25:49.812487 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:25:49.812558 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:25:49.812633 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.812696 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:25:49.812759 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:25:49.812815 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.812878 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:25:49.812935 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.813004 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.813086 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:25:49.813154 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.813232 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.813293 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:25:49.813344 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.813408 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.813480 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:25:49.813547 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:25:49.813609 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.813662 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:25:49.813719 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.813777 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.813839 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:25:49.813896 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.813945 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.813991 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 00:25:49.814053 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 00:25:49.814104 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 00:25:49.814157 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 00:25:49.814238 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.814306 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:25:49.814366 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.814435 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.814489 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 00:25:49.814535 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 00:25:49.814847 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] May 13 00:25:49.814911 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.814986 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.815061 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 00:25:49.815121 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.815176 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.815274 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 00:25:49.815343 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.815414 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.815481 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.815545 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.815596 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.815647 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.815702 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.815761 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.815828 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.815890 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.815959 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.816014 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.816085 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 00:25:49.816136 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.816181 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.816870 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 00:25:49.816941 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.817005 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.817476 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 00:25:49.817553 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.817618 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.817685 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.817745 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.817809 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.817870 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.817952 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.818028 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.818095 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.818147 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.818206 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.818281 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.818342 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 00:25:49.818419 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.818482 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.818558 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 00:25:49.818626 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.818691 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.818764 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 00:25:49.818834 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.818902 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.818962 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.819018 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.819081 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.819134 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.819200 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.819305 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.819367 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.819419 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.819483 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.819537 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.819594 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 00:25:49.819652 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.819726 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.819788 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 00:25:49.819840 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 00:25:49.819894 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.819951 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 00:25:49.820007 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.820062 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.820115 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.820169 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.820231 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.820305 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.820363 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.820437 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 00:25:49.820500 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.820555 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.820615 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.820683 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 00:25:49.820698 kernel: PCI: CLS 32 bytes, default 64 May 13 00:25:49.820713 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 00:25:49.820722 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 
00:25:49.820730 kernel: clocksource: Switched to clocksource tsc May 13 00:25:49.820737 kernel: Initialise system trusted keyrings May 13 00:25:49.820745 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 00:25:49.820752 kernel: Key type asymmetric registered May 13 00:25:49.820758 kernel: Asymmetric key parser 'x509' registered May 13 00:25:49.820767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 00:25:49.820774 kernel: io scheduler mq-deadline registered May 13 00:25:49.820782 kernel: io scheduler kyber registered May 13 00:25:49.820793 kernel: io scheduler bfq registered May 13 00:25:49.820862 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 00:25:49.820923 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.820984 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 00:25:49.821052 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821125 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 00:25:49.821202 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821295 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 00:25:49.821358 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821420 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 00:25:49.821502 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821568 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 00:25:49.821642 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821724 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 00:25:49.821806 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821879 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 00:25:49.821939 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822012 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 00:25:49.822098 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822171 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 00:25:49.822303 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822382 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 00:25:49.822457 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822525 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 00:25:49.822587 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822655 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 00:25:49.822715 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822781 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 00:25:49.822843 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822912 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 00:25:49.823001 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823070 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 00:25:49.823139 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823200 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 00:25:49.823352 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823419 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 00:25:49.823489 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823556 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 00:25:49.823631 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823696 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 00:25:49.823757 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823821 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 00:25:49.823885 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823947 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 00:25:49.824007 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824067 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 00:25:49.824124 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824185 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 00:25:49.824272 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824334 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 00:25:49.824390 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824468 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 00:25:49.824529 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824596 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 00:25:49.824663 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824734 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 00:25:49.824815 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824895 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 00:25:49.824959 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825039 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 00:25:49.825114 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825179 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 00:25:49.825289 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825370 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 00:25:49.825448 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:25:49.825480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:25:49.825489 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:25:49.825496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 00:25:49.825503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:25:49.825511 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:25:49.825581 kernel: rtc_cmos 00:01: registered as rtc0 May 13 00:25:49.825600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:25:49.825662 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T00:25:49 UTC (1747095949) May 13 00:25:49.825723 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 00:25:49.825738 kernel: intel_pstate: CPU model not supported May 13 00:25:49.825748 kernel: NET: Registered PF_INET6 protocol family May 13 00:25:49.825759 kernel: Segment Routing with IPv6 May 13 00:25:49.825766 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:25:49.825777 kernel: NET: Registered PF_PACKET protocol family May 13 00:25:49.825789 kernel: Key type dns_resolver registered May 13 00:25:49.825803 kernel: IPI shorthand broadcast: enabled May 13 00:25:49.825811 kernel: sched_clock: Marking stable (934151254, 239808089)->(1238757294, -64797951) May 13 00:25:49.825818 kernel: registered taskstats version 1 May 13 00:25:49.825825 kernel: Loading compiled-in X.509 certificates May 13 00:25:49.825836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f' May 13 00:25:49.825843 kernel: Key type .fscrypt registered May 13 00:25:49.825852 kernel: Key type fscrypt-provisioning registered May 13 00:25:49.825862 
kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:25:49.825876 kernel: ima: Allocated hash algorithm: sha1 May 13 00:25:49.825887 kernel: ima: No architecture policies found May 13 00:25:49.825898 kernel: clk: Disabling unused clocks May 13 00:25:49.825909 kernel: Freeing unused kernel image (initmem) memory: 42864K May 13 00:25:49.825920 kernel: Write protecting the kernel read-only data: 36864k May 13 00:25:49.825932 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 13 00:25:49.825942 kernel: Run /init as init process May 13 00:25:49.825949 kernel: with arguments: May 13 00:25:49.825955 kernel: /init May 13 00:25:49.825963 kernel: with environment: May 13 00:25:49.825976 kernel: HOME=/ May 13 00:25:49.825987 kernel: TERM=linux May 13 00:25:49.825994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:25:49.826005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:25:49.826017 systemd[1]: Detected virtualization vmware. May 13 00:25:49.826029 systemd[1]: Detected architecture x86-64. May 13 00:25:49.826040 systemd[1]: Running in initrd. May 13 00:25:49.826051 systemd[1]: No hostname configured, using default hostname. May 13 00:25:49.826060 systemd[1]: Hostname set to . May 13 00:25:49.826069 systemd[1]: Initializing machine ID from random generator. May 13 00:25:49.826077 systemd[1]: Queued start job for default target initrd.target. May 13 00:25:49.826088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:25:49.826096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:25:49.826108 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:25:49.826120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:25:49.826129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:25:49.826138 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:25:49.826147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:25:49.826157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:25:49.826168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:25:49.826175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:25:49.826182 systemd[1]: Reached target paths.target - Path Units. May 13 00:25:49.826196 systemd[1]: Reached target slices.target - Slice Units. May 13 00:25:49.826205 systemd[1]: Reached target swap.target - Swaps. May 13 00:25:49.826224 systemd[1]: Reached target timers.target - Timer Units. May 13 00:25:49.826237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:25:49.826247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:25:49.826254 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
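Note: the lines above mark the handoff from kernel to userspace: the kernel execs /init with its arguments and environment, and systemd 255 takes over inside the initrd, detecting VMware virtualization and the x86-64 architecture. From here on the transcript keeps a uniform "timestamp source[pid]: message" shape, so it can be split into structured records for post-processing. A minimal sketch in Python, using a sample line copied from this log; the regex and field names are mine, not part of any tool involved in the boot:

    import re

    # Matches lines shaped like "May 13 00:25:49.826017 systemd[1]: Detected virtualization vmware."
    LINE_RE = re.compile(
        r"(?P<ts>\w+ \d+ \d+:\d+:\d+\.\d+) "        # timestamp: "May 13 00:25:49.826017"
        r"(?P<src>[\w./-]+)(?:\[(?P<pid>\d+)\])?: "  # source: "kernel" or "systemd[1]"
        r"(?P<msg>.*)"                               # the message itself
    )

    def parse(line):
        m = LINE_RE.match(line)
        return m.groupdict() if m else None

    sample = "May 13 00:25:49.826017 systemd[1]: Detected virtualization vmware."
    print(parse(sample))
    # {'ts': 'May 13 00:25:49.826017', 'src': 'systemd', 'pid': '1', 'msg': 'Detected virtualization vmware.'}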
May 13 00:25:49.826261 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:25:49.826267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:25:49.826277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:25:49.826287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:25:49.826296 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:25:49.826303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:25:49.826314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:25:49.826322 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:25:49.826329 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:25:49.826335 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:25:49.826341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:25:49.826350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:49.826356 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:25:49.826366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:25:49.826372 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:25:49.826395 systemd-journald[215]: Collecting audit messages is disabled. May 13 00:25:49.826418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:25:49.826431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:25:49.826443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:25:49.826453 kernel: Bridge firewalling registered May 13 00:25:49.826464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:25:49.826475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:25:49.826483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:49.826493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:49.826502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:25:49.826509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:25:49.826515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:25:49.826524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:49.826536 systemd-journald[215]: Journal started May 13 00:25:49.826552 systemd-journald[215]: Runtime Journal (/run/log/journal/8020a967d0004fc38e62b2bed645c41a) is 4.8M, max 38.6M, 33.8M free. May 13 00:25:49.834434 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:25:49.756192 systemd-modules-load[216]: Inserted module 'overlay' May 13 00:25:49.783341 systemd-modules-load[216]: Inserted module 'br_netfilter' May 13 00:25:49.836290 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:25:49.839195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
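Note: systemd-journald comes up here with a runtime journal under /run (4.8M used of a 38.6M cap), and because every entry carries a microsecond timestamp, the transcript itself is enough to measure how long individual boot steps took. A small helper, assuming all stamps fall within the same day (true for this log); the two sample values are taken verbatim from the journald lines above:

    from datetime import datetime

    def elapsed_ms(t0, t1):
        """Milliseconds between two time-of-day stamps from this transcript."""
        fmt = "%H:%M:%S.%f"
        return (datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)).total_seconds() * 1000.0

    # "Journal started" is stamped 00:25:49.826536; the unit shows as started at 00:25:49.836290.
    print(elapsed_ms("00:25:49.826536", "00:25:49.836290"))   # ~9.75 ms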
May 13 00:25:49.840499 dracut-cmdline[236]: dracut-dracut-053 May 13 00:25:49.842113 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.846568 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:25:49.853551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:25:49.871288 systemd-resolved[269]: Positive Trust Anchors: May 13 00:25:49.871297 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:25:49.871325 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:25:49.873072 systemd-resolved[269]: Defaulting to hostname 'linux'. May 13 00:25:49.874748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:25:49.875101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:25:49.892245 kernel: SCSI subsystem initialized May 13 00:25:49.900240 kernel: Loading iSCSI transport class v2.0-870. May 13 00:25:49.908237 kernel: iscsi: registered transport (tcp) May 13 00:25:49.924233 kernel: iscsi: registered transport (qla4xxx) May 13 00:25:49.924304 kernel: QLogic iSCSI HBA Driver May 13 00:25:49.945895 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:25:49.950404 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:25:49.964825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:25:49.964855 kernel: device-mapper: uevent: version 1.0.3 May 13 00:25:49.965225 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:25:49.998232 kernel: raid6: avx2x4 gen() 52287 MB/s May 13 00:25:50.013238 kernel: raid6: avx2x2 gen() 48452 MB/s May 13 00:25:50.030646 kernel: raid6: avx2x1 gen() 37327 MB/s May 13 00:25:50.030707 kernel: raid6: using algorithm avx2x4 gen() 52287 MB/s May 13 00:25:50.048482 kernel: raid6: .... xor() 17115 MB/s, rmw enabled May 13 00:25:50.048528 kernel: raid6: using avx2x2 recovery algorithm May 13 00:25:50.062235 kernel: xor: automatically using best checksumming function avx May 13 00:25:50.168360 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:25:50.173740 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:25:50.178448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:25:50.185466 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
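Note: the dracut-cmdline entry above echoes the full kernel command line it will act on. It is a flat list of bare flags and key=value tokens, where a value may itself contain '=' (root=LABEL=ROOT, verity.usr=PARTUUID=...), so splitting each token at the first '=' is enough to recover the parameters. A minimal sketch; the string is copied from the log entry above:

    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
               "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
               "console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin "
               "verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592")

    params, flags = {}, []
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")       # split at the first '=' only
            params.setdefault(key, []).append(value)   # repeated keys (console, rootflags) become lists
        else:
            flags.append(token)                        # bare flags such as flatcar.autologin

    print(params["root"])      # ['LABEL=ROOT']
    print(params["console"])   # ['ttyS0,115200n8', 'tty0']
    print(flags)               # ['flatcar.autologin']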
May 13 00:25:50.187947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:25:50.194313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:25:50.201900 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation May 13 00:25:50.218182 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:25:50.222371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:25:50.305084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:25:50.309329 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:25:50.317841 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:25:50.318522 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:25:50.319183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:25:50.319440 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:25:50.326402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:25:50.332486 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:25:50.386230 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 00:25:50.386264 kernel: vmw_pvscsi: using 64bit dma May 13 00:25:50.391297 kernel: vmw_pvscsi: max_id: 16 May 13 00:25:50.391328 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 00:25:50.398664 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 13 00:25:50.404112 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 00:25:50.404242 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 00:25:50.404252 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 00:25:50.404321 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 00:25:50.404881 kernel: vmw_pvscsi: using MSI-X May 13 00:25:50.409241 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:25:50.415417 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 00:25:50.421228 kernel: libata version 3.00 loaded. May 13 00:25:50.424385 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 00:25:50.424438 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 00:25:50.424562 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:25:50.425561 kernel: scsi host1: ata_piix May 13 00:25:50.426347 kernel: AES CTR mode by8 optimization enabled May 13 00:25:50.427644 kernel: scsi host2: ata_piix May 13 00:25:50.427742 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 13 00:25:50.427827 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 13 00:25:50.427073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:25:50.431318 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 13 00:25:50.431337 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 00:25:50.427166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:50.429900 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:50.431510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:25:50.431618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
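Note: the vmxnet3 NIC is first probed as eth0 and then renamed to ens192 by udev's predictable-naming policy (naming scheme 'v255' per the line above); the PCI address 0000:0b:00.0 in the kernel messages is what links the two names. On a running system that mapping can be read back from sysfs. A sketch under the usual sysfs layout (the 'device' symlink exists for PCI-backed NICs but not for virtual ones such as lo); this is a generic lookup, not something the boot flow logged here performs:

    import os

    def pci_address(ifname):
        """PCI address behind a network interface, e.g. '0000:0b:00.0',
        or None for virtual interfaces without a backing device."""
        link = f"/sys/class/net/{ifname}/device"
        if not os.path.islink(link):
            return None
        return os.path.basename(os.path.realpath(link))

    for ifname in sorted(os.listdir("/sys/class/net")):
        print(ifname, pci_address(ifname))   # e.g. "ens192 0000:0b:00.0" on this VM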
May 13 00:25:50.432466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:50.441233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:50.453750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:50.458322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:50.467336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:50.594241 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 00:25:50.600272 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 13 00:25:50.617699 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 00:25:50.617855 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 00:25:50.617924 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 00:25:50.617994 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 13 00:25:50.618351 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 13 00:25:50.622566 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 00:25:50.622759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:25:50.625232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.625265 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 00:25:50.633407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:25:50.734249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 13 00:25:50.735613 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (480) May 13 00:25:50.738170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 13 00:25:50.740613 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (483) May 13 00:25:50.746234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 00:25:50.749381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 13 00:25:50.749577 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 13 00:25:50.758402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:25:50.834243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.858243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.865248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:51.879149 disk-uuid[588]: The operation has completed successfully. May 13 00:25:51.879598 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:52.031653 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:25:52.031734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:25:52.039391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:25:52.041676 sh[607]: Success May 13 00:25:52.052236 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 00:25:52.211423 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:25:52.212518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:25:52.212743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 13 00:25:52.290756 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7 May 13 00:25:52.290809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.290832 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:25:52.290846 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:25:52.290860 kernel: BTRFS info (device dm-0): using free space tree May 13 00:25:52.300241 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 13 00:25:52.301798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:25:52.310392 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 13 00:25:52.311786 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:25:52.415383 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.415432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.417232 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:52.425235 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:52.436879 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:25:52.438250 kernel: BTRFS info (device sda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.441855 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 00:25:52.447344 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:25:52.462953 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:25:52.467342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 00:25:52.517374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:25:52.524349 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:25:52.536298 systemd-networkd[795]: lo: Link UP May 13 00:25:52.536512 systemd-networkd[795]: lo: Gained carrier May 13 00:25:52.537425 systemd-networkd[795]: Enumeration completed May 13 00:25:52.537604 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:25:52.537771 systemd[1]: Reached target network.target - Network. May 13 00:25:52.538163 systemd-networkd[795]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
May 13 00:25:52.541781 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:25:52.541992 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:25:52.541997 systemd-networkd[795]: ens192: Link UP May 13 00:25:52.542000 systemd-networkd[795]: ens192: Gained carrier May 13 00:25:52.601429 ignition[666]: Ignition 2.19.0 May 13 00:25:52.601436 ignition[666]: Stage: fetch-offline May 13 00:25:52.601478 ignition[666]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.601486 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.601563 ignition[666]: parsed url from cmdline: "" May 13 00:25:52.601565 ignition[666]: no config URL provided May 13 00:25:52.601568 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:25:52.601574 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 13 00:25:52.601957 ignition[666]: config successfully fetched May 13 00:25:52.601981 ignition[666]: parsing config with SHA512: 53452d14a53a35e8366317d5dd39b63daaa4caf2f8c4b6ebca87f6442fa241f8962576457177befe419fbf1d02fbb3643882a8ceb80beac33ad1df7fcfadb9a4 May 13 00:25:52.605983 unknown[666]: fetched base config from "system" May 13 00:25:52.605990 unknown[666]: fetched user config from "vmware" May 13 00:25:52.606276 ignition[666]: fetch-offline: fetch-offline passed May 13 00:25:52.606320 ignition[666]: Ignition finished successfully May 13 00:25:52.606995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:25:52.607421 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:25:52.611358 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 00:25:52.619934 ignition[804]: Ignition 2.19.0 May 13 00:25:52.619941 ignition[804]: Stage: kargs May 13 00:25:52.620056 ignition[804]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.620063 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.620827 ignition[804]: kargs: kargs passed May 13 00:25:52.620865 ignition[804]: Ignition finished successfully May 13 00:25:52.621779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:25:52.626361 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:25:52.634624 ignition[810]: Ignition 2.19.0 May 13 00:25:52.634635 ignition[810]: Stage: disks May 13 00:25:52.634801 ignition[810]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.634808 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.635515 ignition[810]: disks: disks passed May 13 00:25:52.635559 ignition[810]: Ignition finished successfully May 13 00:25:52.636383 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:25:52.636991 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:25:52.637141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:25:52.637341 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:25:52.637530 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:25:52.637701 systemd[1]: Reached target basic.target - Basic System. May 13 00:25:52.641320 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 13 00:25:52.670262 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 13 00:25:52.672661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:25:52.677355 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:25:52.775196 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:25:52.775465 kernel: EXT4-fs (sda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none. May 13 00:25:52.775622 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:25:52.783391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:25:52.785202 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 00:25:52.785666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:25:52.785713 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:25:52.785737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:25:52.794298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (827) May 13 00:25:52.794341 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.794357 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.796294 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:52.798176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:25:52.799101 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 00:25:52.801247 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:52.802659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:25:52.907969 initrd-setup-root[851]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:25:52.910659 initrd-setup-root[858]: cut: /sysroot/etc/group: No such file or directory May 13 00:25:52.913830 initrd-setup-root[865]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:25:52.916749 initrd-setup-root[872]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:25:53.116261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:25:53.121348 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:25:53.123803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:25:53.129236 kernel: BTRFS info (device sda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:53.146926 ignition[940]: INFO : Ignition 2.19.0 May 13 00:25:53.147906 ignition[940]: INFO : Stage: mount May 13 00:25:53.147906 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:53.147906 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:53.149139 ignition[940]: INFO : mount: mount passed May 13 00:25:53.149139 ignition[940]: INFO : Ignition finished successfully May 13 00:25:53.149551 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:25:53.158324 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:25:53.212656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:25:53.285656 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 13 00:25:53.290385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:25:53.417237 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (951) May 13 00:25:53.446638 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:53.446688 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:53.446704 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:53.454229 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:53.456081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:25:53.471668 ignition[968]: INFO : Ignition 2.19.0 May 13 00:25:53.471668 ignition[968]: INFO : Stage: files May 13 00:25:53.472201 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:53.472201 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:53.472482 ignition[968]: DEBUG : files: compiled without relabeling support, skipping May 13 00:25:53.473019 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:25:53.473019 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:25:53.475255 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:25:53.475528 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:25:53.475788 unknown[968]: wrote ssh authorized keys file for user: core May 13 00:25:53.476039 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:25:53.554092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:25:53.672506 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 
00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 00:25:54.205396 systemd-networkd[795]: ens192: Gained IPv6LL May 13 00:25:54.230782 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:25:54.602883 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:54.602883 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:25:54.603577 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:25:54.603577 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service" May 13 00:25:54.610355 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:25:54.610597 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:25:54.610597 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service" May 13 00:25:54.610597 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 13 00:25:54.610597 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:25:54.866925 ignition[968]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:25:54.870248 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:25:54.871864 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:25:54.871864 ignition[968]: INFO : files: files passed May 13 00:25:54.871864 ignition[968]: INFO : Ignition finished successfully May 13 00:25:54.871091 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:25:54.883423 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:25:54.885326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:25:54.886535 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:25:54.886601 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:25:54.894601 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.894601 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.895671 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.896613 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:25:54.896901 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:25:54.900392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:25:54.918900 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:25:54.918968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:25:54.919336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:25:54.919463 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:25:54.919684 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:25:54.920247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:25:54.942256 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 13 00:25:54.946327 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:25:54.952513 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:25:54.952873 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:25:54.953031 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:25:54.953168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:25:54.953253 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:25:54.953559 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:25:54.953877 systemd[1]: Stopped target basic.target - Basic System. May 13 00:25:54.954071 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 00:25:54.954296 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:25:54.954522 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:25:54.954733 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:25:54.954955 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:25:54.955192 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:25:54.955424 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:25:54.955657 systemd[1]: Stopped target swap.target - Swaps. May 13 00:25:54.955854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:25:54.955926 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:25:54.956228 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:25:54.956479 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:25:54.956677 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:25:54.956727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:25:54.956890 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:25:54.956978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:25:54.957258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:25:54.957327 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:25:54.957594 systemd[1]: Stopped target paths.target - Path Units. May 13 00:25:54.957737 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:25:54.962250 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:25:54.962462 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:25:54.962674 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:25:54.962870 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:25:54.962947 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:25:54.963186 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:25:54.963246 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:25:54.963400 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:25:54.963466 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 13 00:25:54.963772 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:25:54.963830 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:25:54.969409 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:25:54.969557 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:25:54.969685 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:25:54.971359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:25:54.971562 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:25:54.971684 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:25:54.971983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:25:54.972100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:25:54.975977 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:25:54.976560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:25:54.980279 ignition[1022]: INFO : Ignition 2.19.0 May 13 00:25:54.980784 ignition[1022]: INFO : Stage: umount May 13 00:25:54.980784 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:54.980784 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:54.982282 ignition[1022]: INFO : umount: umount passed May 13 00:25:54.982282 ignition[1022]: INFO : Ignition finished successfully May 13 00:25:54.982733 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:25:54.982796 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:25:54.983054 systemd[1]: Stopped target network.target - Network. May 13 00:25:54.983583 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:25:54.983614 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:25:54.983761 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:25:54.983785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:25:54.984395 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:25:54.984420 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:25:54.985300 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:25:54.985329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:25:54.985530 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:25:54.985674 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:25:54.993999 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:25:54.994059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:25:54.994579 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:25:54.994609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:25:54.999326 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:25:54.999433 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:25:54.999466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:25:54.999610 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. 
May 13 00:25:54.999634 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:25:54.999808 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:25:55.000047 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:25:55.000109 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:25:55.004024 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:25:55.005714 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:25:55.005770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:25:55.007056 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:25:55.007086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:25:55.007239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:25:55.007262 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:25:55.012766 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:25:55.013170 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:25:55.013748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:25:55.013783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:25:55.013922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:25:55.013948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:25:55.014082 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:25:55.014114 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:25:55.015253 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:25:55.015293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:25:55.015557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:25:55.015589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:55.016482 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:25:55.016592 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:25:55.016629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:25:55.016762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:25:55.016791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:55.017026 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:25:55.017074 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:25:55.023996 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:25:55.024079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:25:55.114631 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:25:55.114700 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:25:55.115106 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:25:55.115231 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:25:55.115262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 13 00:25:55.118369 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:25:55.170192 systemd[1]: Switching root. May 13 00:25:55.195049 systemd-journald[215]: Journal stopped May 13 00:25:49.756080 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025 May 13 00:25:49.756098 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.756104 kernel: Disabled fast string operations May 13 00:25:49.756108 kernel: BIOS-provided physical RAM map: May 13 00:25:49.756112 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 13 00:25:49.756116 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 13 00:25:49.756122 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 13 00:25:49.756126 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 13 00:25:49.756130 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 13 00:25:49.756135 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 13 00:25:49.756139 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 13 00:25:49.756143 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 13 00:25:49.756147 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 13 00:25:49.756151 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 13 00:25:49.756162 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 13 00:25:49.756167 kernel: NX (Execute Disable) protection: active May 13 00:25:49.756172 kernel: APIC: Static calls initialized May 13 00:25:49.756180 kernel: SMBIOS 2.7 present. May 13 00:25:49.756185 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 13 00:25:49.756190 kernel: vmware: hypercall mode: 0x00 May 13 00:25:49.756194 kernel: Hypervisor detected: VMware May 13 00:25:49.756199 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 13 00:25:49.756205 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 13 00:25:49.756210 kernel: vmware: using clock offset of 3586072690 ns May 13 00:25:49.757716 kernel: tsc: Detected 3408.000 MHz processor May 13 00:25:49.757725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:25:49.757730 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:25:49.757735 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 13 00:25:49.757740 kernel: total RAM covered: 3072M May 13 00:25:49.757745 kernel: Found optimal setting for mtrr clean up May 13 00:25:49.757750 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 13 00:25:49.757758 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 13 00:25:49.757763 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:25:49.757768 kernel: Using GB pages for direct mapping May 13 00:25:49.757772 kernel: ACPI: Early table checksum verification disabled May 13 00:25:49.757777 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 13 00:25:49.757782 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 13 00:25:49.757787 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 13 00:25:49.757792 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 13 00:25:49.757797 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:25:49.757805 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 00:25:49.757810 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 13 00:25:49.757815 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 13 00:25:49.757820 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 13 00:25:49.757825 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 13 00:25:49.757832 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 13 00:25:49.757837 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 13 00:25:49.757842 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 13 00:25:49.757847 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 13 00:25:49.757852 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:25:49.757857 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 00:25:49.757862 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 13 00:25:49.757867 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 13 00:25:49.757872 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 13 00:25:49.757877 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 13 00:25:49.757883 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 13 00:25:49.757888 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 13 00:25:49.757894 kernel: system APIC only can use physical flat May 13 00:25:49.757899 kernel: APIC: Switched APIC routing to: physical flat May 13 00:25:49.757904 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 00:25:49.757909 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 May 13 00:25:49.757914 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 May 13 00:25:49.757919 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 May 13 00:25:49.757924 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 May 13 00:25:49.757930 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 May 13 00:25:49.757935 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 May 13 00:25:49.757940 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 May 13 00:25:49.757945 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 May 13 00:25:49.757950 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 May 13 00:25:49.757955 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 May 13 00:25:49.757960 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 May 13 00:25:49.757965 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 May 13 00:25:49.757970 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 May 13 00:25:49.757975 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 May 13 00:25:49.757981 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 May 13 00:25:49.757986 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 May 13 00:25:49.757991 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 May 13 00:25:49.757996 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 May 13 00:25:49.758001 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 May 13 00:25:49.758006 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 May 13 00:25:49.758011 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 May 13 00:25:49.758016 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 May 13 00:25:49.758021 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 May 13 00:25:49.758026 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 May 13 00:25:49.758032 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 May 13 00:25:49.758037 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 May 13 00:25:49.758042 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 May 13 00:25:49.758047 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 May 13 00:25:49.758052 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 May 13 00:25:49.758057 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 May 13 00:25:49.758062 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 May 13 00:25:49.758067 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 May 13 00:25:49.758072 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 May 13 00:25:49.758077 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 May 13 00:25:49.758081 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 May 13 00:25:49.758088 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 May 13 00:25:49.758092 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 May 13 00:25:49.758098 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 May 13 00:25:49.758102 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 May 13 00:25:49.758107 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 May 13 00:25:49.758112 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 May 13 00:25:49.758117 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 May 13 00:25:49.758122 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 May 13 00:25:49.758127 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 May 13 00:25:49.758132 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 May 13 00:25:49.758138 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 May 13 00:25:49.758143 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 May 13 00:25:49.758148 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 May 13 00:25:49.758153 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 May 13 00:25:49.758158 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 May 13 00:25:49.758163 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 May 13 00:25:49.758168 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 May 13 00:25:49.758173 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 May 13 00:25:49.758178 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 May 13 00:25:49.758182 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 May 13 00:25:49.758188 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 May 13 00:25:49.758202 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 May 13 00:25:49.758208 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 May 13 00:25:49.758232 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 May 13 00:25:49.758240 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 May 13 00:25:49.758245 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 May 13 00:25:49.758251 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 May 13 00:25:49.758256 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 May 13 00:25:49.758261 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 May 13 00:25:49.758271 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 May 13 00:25:49.758276 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 May 13 00:25:49.758282 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 May 13 00:25:49.758287 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 May 13 00:25:49.758292 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 May 13 00:25:49.758300 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 May 13 00:25:49.758305 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 May 13 00:25:49.758310 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 May 13 00:25:49.758316 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 May 13 00:25:49.758321 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 May 13 00:25:49.758328 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 May 13 00:25:49.758333 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 May 13 00:25:49.758338 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 May 13 00:25:49.758344 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 May 13 00:25:49.758349 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 May 13 00:25:49.758354 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 May 13 00:25:49.758359 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 May 13 00:25:49.758365 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 May 13 00:25:49.758370 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 May 13 00:25:49.758375 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 May 13 00:25:49.758382 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 May 13 00:25:49.758387 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 May 13 00:25:49.758392 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 May 13 00:25:49.758397 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 May 13 00:25:49.758403 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 May 13 00:25:49.758408 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 May 13 00:25:49.758413 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 May 13 00:25:49.758418 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 May 13 00:25:49.758424 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 May 13 00:25:49.758429 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 May 13 00:25:49.758435 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 May 13 00:25:49.758441 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 May 13 00:25:49.758446 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 May 13 00:25:49.758452 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 May 13 00:25:49.758457 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 May 13 00:25:49.758462 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 May 13 00:25:49.758467 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 May 13 00:25:49.758473 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 May 13 00:25:49.758478 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 May 13 00:25:49.758483 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 May 13 00:25:49.758490 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 May 13 00:25:49.758495 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 May 13 00:25:49.758500 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 May 13 00:25:49.758505 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 May 13 00:25:49.758511 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 May 13 00:25:49.758516 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 May 13 00:25:49.758521 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 May 13 00:25:49.758527 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 May 13 00:25:49.758532 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 May 13 00:25:49.758537 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 May 13 00:25:49.758542 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 May 13 00:25:49.758549 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 May 13 00:25:49.758554 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 May 13 00:25:49.758560 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 May 13 00:25:49.758565 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 May 13 00:25:49.758570 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 May 13 00:25:49.758575 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 May 13 00:25:49.758581 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 May 13 00:25:49.758586 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 May 13 00:25:49.758591 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 May 13 00:25:49.758598 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 May 13 00:25:49.758609 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 May 13 00:25:49.758615 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 May 13 00:25:49.758620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 00:25:49.758626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 00:25:49.758632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 13 00:25:49.758639 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] May 13 00:25:49.758645 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] May 13 00:25:49.758650 kernel: Zone ranges: May 13 00:25:49.758656 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:25:49.758663 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 13 00:25:49.758668 kernel: Normal empty May 13 00:25:49.758674 kernel: Movable zone start for each node May 13 00:25:49.758679 kernel: Early memory node ranges May 13 00:25:49.758685 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 13 00:25:49.758690 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 13 00:25:49.758695 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 13 00:25:49.758701 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 13 00:25:49.758706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:25:49.758712 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 13 00:25:49.758719 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 13 00:25:49.758725 kernel: ACPI: PM-Timer IO Port: 0x1008 May 13 00:25:49.758730 kernel: system APIC only can use physical flat May 13 00:25:49.758735 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 13 00:25:49.758741 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 13 00:25:49.758747 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 13 00:25:49.758752 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 13 00:25:49.758757 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 13 00:25:49.758763 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 13 00:25:49.758769 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) May 13 00:25:49.758775 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 13 00:25:49.758780 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 13 00:25:49.758785 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 13 00:25:49.758791 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 13 00:25:49.758796 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 13 00:25:49.758801 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 13 00:25:49.758807 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 13 00:25:49.758812 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 13 00:25:49.758817 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 13 00:25:49.758827 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 13 00:25:49.758833 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 13 00:25:49.758842 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 13 00:25:49.758851 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 13 00:25:49.758860 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 13 00:25:49.758866 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 13 00:25:49.758871 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 13 00:25:49.758876 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 13 00:25:49.758882 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 13 00:25:49.758887 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 13 00:25:49.758894 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 13 00:25:49.758900 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 13 00:25:49.758905 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 13 00:25:49.758913 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 13 00:25:49.758922 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 13 00:25:49.758932 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 13 00:25:49.758940 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 13 00:25:49.758946 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 13 00:25:49.758951 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 13 00:25:49.758959 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 13 00:25:49.758964 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 13 00:25:49.758969 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 13 00:25:49.758974 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 13 00:25:49.758980 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 13 00:25:49.758985 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 13 00:25:49.758991 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 13 00:25:49.758996 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 13 00:25:49.759002 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 13 00:25:49.759007 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 13 00:25:49.759014 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 13 00:25:49.759019 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 13 00:25:49.759024 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 13 00:25:49.759030 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 13 00:25:49.759035 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) May 13 00:25:49.759040 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 13 00:25:49.759046 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 13 00:25:49.759051 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 13 00:25:49.759056 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 13 00:25:49.759063 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 13 00:25:49.759068 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 13 00:25:49.759073 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 13 00:25:49.759079 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 13 00:25:49.759084 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 13 00:25:49.759090 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 13 00:25:49.759095 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 13 00:25:49.759100 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 13 00:25:49.759106 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 13 00:25:49.759111 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 13 00:25:49.759117 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 13 00:25:49.759123 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 13 00:25:49.759128 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 13 00:25:49.759133 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 13 00:25:49.759139 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 13 00:25:49.759144 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 13 00:25:49.759149 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 13 00:25:49.759155 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 13 00:25:49.759160 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 13 00:25:49.759166 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 13 00:25:49.759172 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 13 00:25:49.759177 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 13 00:25:49.759183 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 13 00:25:49.759188 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 13 00:25:49.759193 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 13 00:25:49.759199 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 13 00:25:49.759204 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 13 00:25:49.759210 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 13 00:25:49.761264 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 13 00:25:49.761276 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 13 00:25:49.761282 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 13 00:25:49.761288 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 13 00:25:49.761293 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 13 00:25:49.761299 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 13 00:25:49.761307 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 13 00:25:49.761314 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 13 00:25:49.761323 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 13 00:25:49.761330 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 13 00:25:49.761335 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 13 00:25:49.761343 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 13 00:25:49.761348 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 13 00:25:49.761354 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 13 00:25:49.761359 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 13 00:25:49.761366 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 13 00:25:49.761373 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 13 00:25:49.761378 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 13 00:25:49.761387 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 13 00:25:49.761395 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 13 00:25:49.761402 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 13 00:25:49.761409 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 13 00:25:49.761416 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 13 00:25:49.761421 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 13 00:25:49.761426 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 13 00:25:49.761432 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 13 00:25:49.761438 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 13 00:25:49.761446 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 13 00:25:49.761462 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 13 00:25:49.761469 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 13 00:25:49.761477 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 13 00:25:49.761482 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 13 00:25:49.761488 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 13 00:25:49.761493 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 13 00:25:49.761498 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 13 
00:25:49.761503 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 13 00:25:49.761509 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 13 00:25:49.761514 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 13 00:25:49.761521 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 13 00:25:49.761528 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 13 00:25:49.761537 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 13 00:25:49.761545 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 13 00:25:49.761551 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 13 00:25:49.761556 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 13 00:25:49.761562 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 13 00:25:49.761567 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 13 00:25:49.761576 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 13 00:25:49.761585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 13 00:25:49.761595 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:25:49.761605 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 13 00:25:49.761610 kernel: TSC deadline timer available May 13 00:25:49.761619 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs May 13 00:25:49.761624 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 13 00:25:49.761631 kernel: Booting paravirtualized kernel on VMware hypervisor May 13 00:25:49.761641 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:25:49.761650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 13 00:25:49.761659 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 May 13 00:25:49.761669 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 May 13 00:25:49.761679 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 13 00:25:49.761685 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 13 00:25:49.761690 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 13 00:25:49.761695 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 13 00:25:49.761701 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 13 00:25:49.761717 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 13 00:25:49.761728 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 13 00:25:49.761735 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 13 00:25:49.761740 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 13 00:25:49.761747 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 13 00:25:49.761753 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 13 00:25:49.761759 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 13 00:25:49.761765 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 13 00:25:49.761771 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 13 00:25:49.761779 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 13 00:25:49.761789 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 13 00:25:49.761800 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.761808 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:25:49.761814 kernel: random: crng init done May 13 00:25:49.761820 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 13 00:25:49.761828 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 13 00:25:49.761834 kernel: printk: log_buf_len min size: 262144 bytes May 13 00:25:49.761841 kernel: printk: log_buf_len: 1048576 bytes May 13 00:25:49.761848 kernel: printk: early log buf free: 239648(91%) May 13 00:25:49.761854 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:25:49.761860 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 00:25:49.761869 kernel: Fallback order for Node 0: 0 May 13 00:25:49.761876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 May 13 00:25:49.761882 kernel: Policy zone: DMA32 May 13 00:25:49.761888 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:25:49.761897 kernel: Memory: 1936356K/2096628K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 160012K reserved, 0K cma-reserved) May 13 00:25:49.761910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 13 00:25:49.761921 kernel: ftrace: allocating 37944 entries in 149 pages May 13 00:25:49.761932 kernel: ftrace: allocated 149 pages with 4 groups May 13 00:25:49.761942 kernel: Dynamic Preempt: voluntary May 13 00:25:49.761952 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:25:49.761963 kernel: rcu: RCU event tracing is enabled. May 13 00:25:49.761971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 13 00:25:49.761977 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:25:49.761983 kernel: Rude variant of Tasks RCU enabled. May 13 00:25:49.761991 kernel: Tracing variant of Tasks RCU enabled. May 13 00:25:49.761996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:25:49.762002 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 13 00:25:49.762010 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 13 00:25:49.762019 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 13 00:25:49.762027 kernel: Console: colour VGA+ 80x25 May 13 00:25:49.762037 kernel: printk: console [tty0] enabled May 13 00:25:49.762048 kernel: printk: console [ttyS0] enabled May 13 00:25:49.762054 kernel: ACPI: Core revision 20230628 May 13 00:25:49.762062 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 13 00:25:49.762071 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:25:49.762080 kernel: x2apic enabled May 13 00:25:49.762087 kernel: APIC: Switched APIC routing to: physical x2apic May 13 00:25:49.762096 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:25:49.762105 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 00:25:49.762111 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 13 00:25:49.762117 kernel: Disabled fast string operations May 13 00:25:49.762125 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 00:25:49.762135 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 00:25:49.762148 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:25:49.762159 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit May 13 00:25:49.762169 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall May 13 00:25:49.762181 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 13 00:25:49.762191 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 13 00:25:49.762197 kernel: RETBleed: Mitigation: Enhanced IBRS May 13 00:25:49.762203 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:25:49.762209 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 00:25:49.762226 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 00:25:49.762237 kernel: SRBDS: Unknown: Dependent on hypervisor status May 13 00:25:49.762244 kernel: GDS: Unknown: Dependent on hypervisor status May 13 00:25:49.762253 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:25:49.762259 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:25:49.762265 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:25:49.762271 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:25:49.762277 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 00:25:49.762283 kernel: Freeing SMP alternatives memory: 32K May 13 00:25:49.762291 kernel: pid_max: default: 131072 minimum: 1024 May 13 00:25:49.762299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:25:49.762305 kernel: landlock: Up and running. May 13 00:25:49.762314 kernel: SELinux: Initializing. May 13 00:25:49.762322 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:25:49.762330 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 00:25:49.762336 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 13 00:25:49.762342 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762348 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762356 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 00:25:49.762364 kernel: Performance Events: Skylake events, core PMU driver. May 13 00:25:49.762374 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 13 00:25:49.762384 kernel: core: CPUID marked event: 'instructions' unavailable May 13 00:25:49.762393 kernel: core: CPUID marked event: 'bus cycles' unavailable May 13 00:25:49.762403 kernel: core: CPUID marked event: 'cache references' unavailable May 13 00:25:49.762411 kernel: core: CPUID marked event: 'cache misses' unavailable May 13 00:25:49.762420 kernel: core: CPUID marked event: 'branch instructions' unavailable May 13 00:25:49.762426 kernel: core: CPUID marked event: 'branch misses' unavailable May 13 00:25:49.762434 kernel: ... 
version: 1 May 13 00:25:49.762443 kernel: ... bit width: 48 May 13 00:25:49.762450 kernel: ... generic registers: 4 May 13 00:25:49.762459 kernel: ... value mask: 0000ffffffffffff May 13 00:25:49.762465 kernel: ... max period: 000000007fffffff May 13 00:25:49.762471 kernel: ... fixed-purpose events: 0 May 13 00:25:49.762477 kernel: ... event mask: 000000000000000f May 13 00:25:49.762482 kernel: signal: max sigframe size: 1776 May 13 00:25:49.762488 kernel: rcu: Hierarchical SRCU implementation. May 13 00:25:49.762500 kernel: rcu: Max phase no-delay instances is 400. May 13 00:25:49.762510 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 00:25:49.762517 kernel: smp: Bringing up secondary CPUs ... May 13 00:25:49.762525 kernel: smpboot: x86: Booting SMP configuration: May 13 00:25:49.762532 kernel: .... node #0, CPUs: #1 May 13 00:25:49.762538 kernel: Disabled fast string operations May 13 00:25:49.762544 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 13 00:25:49.762549 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 13 00:25:49.762555 kernel: smp: Brought up 1 node, 2 CPUs May 13 00:25:49.762564 kernel: smpboot: Max logical packages: 128 May 13 00:25:49.762576 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 00:25:49.762584 kernel: devtmpfs: initialized May 13 00:25:49.762591 kernel: x86/mm: Memory block size: 128MB May 13 00:25:49.762599 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 00:25:49.762609 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:25:49.762616 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 00:25:49.762624 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:25:49.762635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:25:49.762643 kernel: audit: initializing netlink subsys (disabled) May 13 00:25:49.762651 kernel: audit: type=2000 audit(1747095947.068:1): state=initialized audit_enabled=0 res=1 May 13 00:25:49.762657 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:25:49.762663 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:25:49.762668 kernel: cpuidle: using governor menu May 13 00:25:49.762677 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 00:25:49.762683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:25:49.762692 kernel: dca service started, version 1.12.1 May 13 00:25:49.762699 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 13 00:25:49.762705 kernel: PCI: Using configuration type 1 for base access May 13 00:25:49.762712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:25:49.762718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:25:49.762723 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:25:49.762729 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:25:49.762737 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:25:49.762746 kernel: ACPI: Added _OSI(Module Device) May 13 00:25:49.762756 kernel: ACPI: Added _OSI(Processor Device) May 13 00:25:49.762764 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:25:49.762771 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:25:49.762778 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:25:49.762784 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 00:25:49.762790 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 00:25:49.762796 kernel: ACPI: Interpreter enabled May 13 00:25:49.762802 kernel: ACPI: PM: (supports S0 S1 S5) May 13 00:25:49.762809 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:25:49.762819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:25:49.762827 kernel: PCI: Using E820 reservations for host bridge windows May 13 00:25:49.762835 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 00:25:49.762844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 00:25:49.762942 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:25:49.763021 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 00:25:49.763085 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 00:25:49.763095 kernel: PCI host bridge to bus 0000:00 May 13 00:25:49.763162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:25:49.765287 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.765352 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.765414 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:25:49.765478 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 00:25:49.765529 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 00:25:49.765597 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 13 00:25:49.765673 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 13 00:25:49.765751 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 13 00:25:49.765807 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 13 00:25:49.765865 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 13 00:25:49.765930 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 00:25:49.765992 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 00:25:49.766050 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 00:25:49.766109 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 00:25:49.766182 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 13 00:25:49.766252 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 00:25:49.766319 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 00:25:49.766386 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 13 00:25:49.766461 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 13 00:25:49.766520 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 13 00:25:49.766583 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 13 00:25:49.766645 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 13 00:25:49.766716 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] May 13 00:25:49.766779 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 13 00:25:49.766833 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 13 00:25:49.766892 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:25:49.766954 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 13 00:25:49.767033 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.767098 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.767166 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770241 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.770322 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770382 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.770448 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770512 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.770571 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770629 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.770696 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770763 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.770821 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.770894 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.770978 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771036 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.771104 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771164 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.771243 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771314 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.771386 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771439 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.771510 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771585 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.771655 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771716 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.771781 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771833 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.771900 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.771964 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.772025 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.772076 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.773304 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 May 13 00:25:49.773379 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.773454 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773520 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.773600 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773674 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.773748 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773801 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.773862 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.773921 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.773996 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.774055 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.774111 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.774172 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.775251 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775329 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.775417 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775475 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.775541 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775612 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 00:25:49.775679 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775752 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 00:25:49.775819 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775874 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 00:25:49.775940 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.775999 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 00:25:49.776077 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.776136 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 00:25:49.776197 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.777661 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 00:25:49.777729 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 13 00:25:49.777792 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 00:25:49.777850 kernel: pci_bus 0000:01: extended config space not accessible May 13 00:25:49.777912 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:25:49.777973 kernel: pci_bus 0000:02: extended config space not accessible May 13 00:25:49.777988 kernel: acpiphp: Slot [32] registered May 13 00:25:49.777994 kernel: acpiphp: Slot [33] registered May 13 00:25:49.778001 kernel: acpiphp: Slot [34] registered May 13 00:25:49.778006 kernel: acpiphp: Slot [35] registered May 13 00:25:49.778012 kernel: acpiphp: Slot [36] registered May 13 00:25:49.778018 kernel: acpiphp: Slot [37] registered May 13 00:25:49.778024 kernel: acpiphp: Slot [38] registered May 13 00:25:49.778030 kernel: acpiphp: Slot [39] registered May 13 00:25:49.778035 kernel: acpiphp: Slot [40] registered May 13 00:25:49.778043 kernel: acpiphp: Slot [41] registered May 13 00:25:49.778049 kernel: acpiphp: Slot [42] registered May 13 
00:25:49.778055 kernel: acpiphp: Slot [43] registered May 13 00:25:49.778063 kernel: acpiphp: Slot [44] registered May 13 00:25:49.778069 kernel: acpiphp: Slot [45] registered May 13 00:25:49.778074 kernel: acpiphp: Slot [46] registered May 13 00:25:49.778080 kernel: acpiphp: Slot [47] registered May 13 00:25:49.778089 kernel: acpiphp: Slot [48] registered May 13 00:25:49.778095 kernel: acpiphp: Slot [49] registered May 13 00:25:49.778101 kernel: acpiphp: Slot [50] registered May 13 00:25:49.778109 kernel: acpiphp: Slot [51] registered May 13 00:25:49.778119 kernel: acpiphp: Slot [52] registered May 13 00:25:49.778125 kernel: acpiphp: Slot [53] registered May 13 00:25:49.778135 kernel: acpiphp: Slot [54] registered May 13 00:25:49.778141 kernel: acpiphp: Slot [55] registered May 13 00:25:49.778147 kernel: acpiphp: Slot [56] registered May 13 00:25:49.778155 kernel: acpiphp: Slot [57] registered May 13 00:25:49.778162 kernel: acpiphp: Slot [58] registered May 13 00:25:49.778168 kernel: acpiphp: Slot [59] registered May 13 00:25:49.778176 kernel: acpiphp: Slot [60] registered May 13 00:25:49.778186 kernel: acpiphp: Slot [61] registered May 13 00:25:49.778196 kernel: acpiphp: Slot [62] registered May 13 00:25:49.778207 kernel: acpiphp: Slot [63] registered May 13 00:25:49.778460 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 00:25:49.778656 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:25:49.778717 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:25:49.781302 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.781367 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 00:25:49.781426 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 00:25:49.781491 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 00:25:49.781557 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 00:25:49.781633 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 00:25:49.781706 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 13 00:25:49.781763 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 13 00:25:49.781816 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 00:25:49.781871 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:25:49.781926 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 00:25:49.781994 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:25:49.782064 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 00:25:49.782145 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:25:49.782199 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.782342 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:25:49.782410 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:25:49.782514 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.782594 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.782670 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:25:49.782739 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:25:49.782821 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.782890 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.782961 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:25:49.783033 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.783090 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.783143 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:25:49.783195 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.783265 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.783356 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:25:49.783431 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.783501 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.783555 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:25:49.783606 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.783657 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.783709 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:25:49.783767 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.783850 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.783944 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 13 00:25:49.784019 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 13 00:25:49.784085 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 13 00:25:49.784156 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 13 00:25:49.784330 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 13 00:25:49.784388 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:25:49.784444 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 00:25:49.784504 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 00:25:49.784558 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:25:49.784626 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:25:49.784710 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:25:49.784791 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.784874 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:25:49.784949 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:25:49.785032 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.785102 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.785160 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:25:49.785247 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:25:49.785313 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.785396 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.785470 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:25:49.785528 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.785586 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.785646 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:25:49.785710 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.785784 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.785855 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:25:49.785912 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.785989 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.786078 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:25:49.786170 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.786379 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.786472 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:25:49.786554 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.786639 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.786720 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:25:49.786800 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:25:49.786877 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.786962 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.787044 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:25:49.787128 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:25:49.787207 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.787302 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.787384 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:25:49.787463 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:25:49.787540 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.787623 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.787688 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:25:49.787759 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.787820 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.787904 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:25:49.787974 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.788037 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.788114 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:25:49.788196 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.788345 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.788425 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:25:49.788493 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.788558 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.788636 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:25:49.788704 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.788781 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.788864 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:25:49.788924 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:25:49.788990 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.789059 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.789115 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:25:49.789191 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:25:49.789256 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:25:49.789318 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.789400 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:25:49.789456 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:25:49.789529 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.789615 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:25:49.789688 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.789764 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.789850 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:25:49.789924 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.789985 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.790067 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:25:49.790130 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.790193 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.790282 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:25:49.790338 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:25:49.790405 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.790488 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:25:49.790574 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.790653 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.790667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 00:25:49.790679 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 May 13 00:25:49.790690 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 00:25:49.790700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:25:49.790711 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 00:25:49.790717 kernel: iommu: Default domain type: Translated May 13 00:25:49.790723 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:25:49.790731 kernel: PCI: Using ACPI for IRQ routing May 13 00:25:49.790740 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:25:49.790746 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 00:25:49.790753 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 00:25:49.790826 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 00:25:49.790898 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 00:25:49.790955 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:25:49.790964 kernel: vgaarb: loaded May 13 00:25:49.790970 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 00:25:49.790979 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 00:25:49.790986 kernel: clocksource: Switched to clocksource tsc-early May 13 00:25:49.790992 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:25:49.790998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:25:49.791004 kernel: pnp: PnP ACPI init May 13 00:25:49.791077 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 00:25:49.791132 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 13 00:25:49.791207 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 00:25:49.791525 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 00:25:49.791609 kernel: pnp 00:06: [dma 2] May 13 00:25:49.791693 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 00:25:49.791768 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 00:25:49.791828 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 00:25:49.791841 kernel: pnp: PnP ACPI: found 8 devices May 13 00:25:49.791849 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:25:49.791855 kernel: NET: Registered PF_INET protocol family May 13 00:25:49.791861 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:25:49.791867 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 00:25:49.791874 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:25:49.791882 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 00:25:49.791895 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 00:25:49.791905 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 00:25:49.791915 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:25:49.791925 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:25:49.791935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:25:49.791944 kernel: NET: Registered PF_XDP protocol family May 13 00:25:49.792012 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 00:25:49.792083 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 00:25:49.792149 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 00:25:49.792240 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 00:25:49.792298 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 00:25:49.792373 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 00:25:49.792448 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 00:25:49.792530 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 00:25:49.792622 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 00:25:49.793005 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 00:25:49.793097 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 13 00:25:49.793171 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 00:25:49.793245 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 00:25:49.793315 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 00:25:49.793397 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 00:25:49.793459 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 00:25:49.793527 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 00:25:49.793597 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 00:25:49.793657 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 00:25:49.793723 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 00:25:49.793803 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 00:25:49.793881 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 00:25:49.793968 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 00:25:49.794029 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.794081 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.794133 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794201 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794392 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794477 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794541 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794592 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794644 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794696 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794775 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.794851 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.794919 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] May 13 00:25:49.794987 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795053 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795120 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795175 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795239 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795302 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795366 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795420 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795474 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795550 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795609 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795682 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795746 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795822 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.795885 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.795950 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.796006 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.796060 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.796126 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.796182 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797360 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797444 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797520 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797584 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797643 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797702 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797770 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797839 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.797904 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.797958 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798026 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798094 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798154 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798211 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798290 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798346 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798412 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798478 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798536 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798591 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] May 13 00:25:49.798648 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798702 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798759 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798822 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798877 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.798935 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.798991 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.799052 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.799129 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.799191 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.800725 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.800808 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.800876 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.800940 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801009 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801074 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801132 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801186 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801253 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801318 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801377 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801441 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801512 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801574 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801635 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801714 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801782 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801849 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:25:49.801916 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.801990 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:25:49.802062 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.802137 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:25:49.802197 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:25:49.802279 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:25:49.802357 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 00:25:49.802424 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:25:49.802500 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:25:49.802565 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.802642 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 13 00:25:49.802709 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 
13 00:25:49.802788 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:25:49.802855 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.802926 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.803003 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:25:49.803075 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:25:49.803143 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.803246 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.803317 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:25:49.803378 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:25:49.803436 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.803487 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.803550 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:25:49.803612 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.803663 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.803724 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:25:49.803795 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.803872 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.803946 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:25:49.804020 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.804079 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.804142 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:25:49.804767 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.804852 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.804917 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:25:49.804984 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.805038 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.805106 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 13 00:25:49.805168 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:25:49.805237 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:25:49.805297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.805351 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.805415 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:25:49.805468 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:25:49.805526 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.805598 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.805683 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:25:49.805752 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:25:49.805828 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.805909 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.805982 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:25:49.806048 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.807529 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.807589 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:25:49.807643 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.807707 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.807767 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:25:49.807820 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.807876 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.807955 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:25:49.808027 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.808093 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.808168 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:25:49.809574 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.809665 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.809740 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:25:49.809811 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:25:49.809865 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.809921 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.809986 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:25:49.810062 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:25:49.810126 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.810197 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.810282 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:25:49.810338 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:25:49.810408 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.810477 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.810544 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:25:49.810621 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.810691 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.810778 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:25:49.810845 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.810928 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.810983 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:25:49.811042 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.811108 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.811202 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:25:49.811793 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.811857 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.811930 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:25:49.812000 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.812053 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.812110 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:25:49.812182 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:25:49.812292 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.812361 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.812426 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:25:49.812487 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:25:49.812558 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:25:49.812633 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.812696 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:25:49.812759 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:25:49.812815 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.812878 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:25:49.812935 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.813004 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.813086 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 00:25:49.813154 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.813232 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.813293 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:25:49.813344 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.813408 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.813480 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:25:49.813547 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:25:49.813609 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.813662 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:25:49.813719 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.813777 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.813839 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:25:49.813896 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.813945 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.813991 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 00:25:49.814053 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 00:25:49.814104 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 00:25:49.814157 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 00:25:49.814238 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:25:49.814306 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:25:49.814366 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:25:49.814435 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:25:49.814489 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 00:25:49.814535 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 00:25:49.814847 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] May 13 00:25:49.814911 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 00:25:49.814986 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:25:49.815061 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 00:25:49.815121 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 00:25:49.815176 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:25:49.815274 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 00:25:49.815343 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 13 00:25:49.815414 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:25:49.815481 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 00:25:49.815545 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:25:49.815596 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 00:25:49.815647 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:25:49.815702 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 00:25:49.815761 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:25:49.815828 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 00:25:49.815890 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:25:49.815959 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 00:25:49.816014 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:25:49.816085 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 00:25:49.816136 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 00:25:49.816181 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:25:49.816870 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 00:25:49.816941 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 00:25:49.817005 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:25:49.817476 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 00:25:49.817553 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 00:25:49.817618 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:25:49.817685 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 00:25:49.817745 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:25:49.817809 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 00:25:49.817870 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:25:49.817952 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 00:25:49.818028 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:25:49.818095 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 00:25:49.818147 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:25:49.818206 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 00:25:49.818281 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:25:49.818342 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 00:25:49.818419 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 00:25:49.818482 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:25:49.818558 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 00:25:49.818626 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 00:25:49.818691 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:25:49.818764 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 00:25:49.818834 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 00:25:49.818902 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:25:49.818962 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 00:25:49.819018 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:25:49.819081 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 00:25:49.819134 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:25:49.819200 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 00:25:49.819305 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:25:49.819367 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 00:25:49.819419 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:25:49.819483 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 00:25:49.819537 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:25:49.819594 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 00:25:49.819652 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 00:25:49.819726 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:25:49.819788 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 00:25:49.819840 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 00:25:49.819894 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:25:49.819951 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 00:25:49.820007 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:25:49.820062 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 00:25:49.820115 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:25:49.820169 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 00:25:49.820231 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:25:49.820305 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 00:25:49.820363 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:25:49.820437 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 00:25:49.820500 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:25:49.820555 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 00:25:49.820615 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:25:49.820683 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 00:25:49.820698 kernel: PCI: CLS 32 bytes, default 64 May 13 00:25:49.820713 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 00:25:49.820722 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 
00:25:49.820730 kernel: clocksource: Switched to clocksource tsc May 13 00:25:49.820737 kernel: Initialise system trusted keyrings May 13 00:25:49.820745 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 00:25:49.820752 kernel: Key type asymmetric registered May 13 00:25:49.820758 kernel: Asymmetric key parser 'x509' registered May 13 00:25:49.820767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 00:25:49.820774 kernel: io scheduler mq-deadline registered May 13 00:25:49.820782 kernel: io scheduler kyber registered May 13 00:25:49.820793 kernel: io scheduler bfq registered May 13 00:25:49.820862 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 00:25:49.820923 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.820984 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 00:25:49.821052 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821125 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 00:25:49.821202 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821295 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 00:25:49.821358 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821420 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 00:25:49.821502 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821568 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 00:25:49.821642 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821724 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 00:25:49.821806 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.821879 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 00:25:49.821939 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822012 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 00:25:49.822098 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822171 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 00:25:49.822303 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822382 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 00:25:49.822457 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822525 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 00:25:49.822587 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822655 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 00:25:49.822715 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822781 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 00:25:49.822843 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.822912 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 00:25:49.823001 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823070 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 00:25:49.823139 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823200 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 00:25:49.823352 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823419 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 00:25:49.823489 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823556 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 00:25:49.823631 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823696 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 00:25:49.823757 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823821 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 00:25:49.823885 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.823947 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 00:25:49.824007 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824067 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 00:25:49.824124 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824185 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 00:25:49.824272 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824334 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 00:25:49.824390 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824468 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 00:25:49.824529 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824596 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 00:25:49.824663 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824734 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 00:25:49.824815 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.824895 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 00:25:49.824959 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825039 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 00:25:49.825114 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825179 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 00:25:49.825289 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825370 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 00:25:49.825448 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:25:49.825468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:25:49.825480 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:25:49.825489 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:25:49.825496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 00:25:49.825503 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:25:49.825511 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:25:49.825581 kernel: rtc_cmos 00:01: registered as rtc0 May 13 00:25:49.825600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:25:49.825662 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T00:25:49 UTC (1747095949) May 13 00:25:49.825723 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 00:25:49.825738 kernel: intel_pstate: CPU model not supported May 13 00:25:49.825748 kernel: NET: Registered PF_INET6 protocol family May 13 00:25:49.825759 kernel: Segment Routing with IPv6 May 13 00:25:49.825766 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:25:49.825777 kernel: NET: Registered PF_PACKET protocol family May 13 00:25:49.825789 kernel: Key type dns_resolver registered May 13 00:25:49.825803 kernel: IPI shorthand broadcast: enabled May 13 00:25:49.825811 kernel: sched_clock: Marking stable (934151254, 239808089)->(1238757294, -64797951) May 13 00:25:49.825818 kernel: registered taskstats version 1 May 13 00:25:49.825825 kernel: Loading compiled-in X.509 certificates May 13 00:25:49.825836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f' May 13 00:25:49.825843 kernel: Key type .fscrypt registered May 13 00:25:49.825852 kernel: Key type fscrypt-provisioning registered May 13 00:25:49.825862 
kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:25:49.825876 kernel: ima: Allocated hash algorithm: sha1 May 13 00:25:49.825887 kernel: ima: No architecture policies found May 13 00:25:49.825898 kernel: clk: Disabling unused clocks May 13 00:25:49.825909 kernel: Freeing unused kernel image (initmem) memory: 42864K May 13 00:25:49.825920 kernel: Write protecting the kernel read-only data: 36864k May 13 00:25:49.825932 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 13 00:25:49.825942 kernel: Run /init as init process May 13 00:25:49.825949 kernel: with arguments: May 13 00:25:49.825955 kernel: /init May 13 00:25:49.825963 kernel: with environment: May 13 00:25:49.825976 kernel: HOME=/ May 13 00:25:49.825987 kernel: TERM=linux May 13 00:25:49.825994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:25:49.826005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:25:49.826017 systemd[1]: Detected virtualization vmware. May 13 00:25:49.826029 systemd[1]: Detected architecture x86-64. May 13 00:25:49.826040 systemd[1]: Running in initrd. May 13 00:25:49.826051 systemd[1]: No hostname configured, using default hostname. May 13 00:25:49.826060 systemd[1]: Hostname set to . May 13 00:25:49.826069 systemd[1]: Initializing machine ID from random generator. May 13 00:25:49.826077 systemd[1]: Queued start job for default target initrd.target. May 13 00:25:49.826088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:25:49.826096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:25:49.826108 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:25:49.826120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:25:49.826129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:25:49.826138 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:25:49.826147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:25:49.826157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:25:49.826168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:25:49.826175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:25:49.826182 systemd[1]: Reached target paths.target - Path Units. May 13 00:25:49.826196 systemd[1]: Reached target slices.target - Slice Units. May 13 00:25:49.826205 systemd[1]: Reached target swap.target - Swaps. May 13 00:25:49.826224 systemd[1]: Reached target timers.target - Timer Units. May 13 00:25:49.826237 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:25:49.826247 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:25:49.826254 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
May 13 00:25:49.826261 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:25:49.826267 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:25:49.826277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:25:49.826287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:25:49.826296 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:25:49.826303 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:25:49.826314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:25:49.826322 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:25:49.826329 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:25:49.826335 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:25:49.826341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:25:49.826350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:49.826356 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:25:49.826366 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:25:49.826372 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:25:49.826395 systemd-journald[215]: Collecting audit messages is disabled. May 13 00:25:49.826418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:25:49.826431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:25:49.826443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:25:49.826453 kernel: Bridge firewalling registered May 13 00:25:49.826464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:25:49.826475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:25:49.826483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:49.826493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:49.826502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:25:49.826509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:25:49.826515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:25:49.826524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:49.826536 systemd-journald[215]: Journal started May 13 00:25:49.826552 systemd-journald[215]: Runtime Journal (/run/log/journal/8020a967d0004fc38e62b2bed645c41a) is 4.8M, max 38.6M, 33.8M free. May 13 00:25:49.834434 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:25:49.756192 systemd-modules-load[216]: Inserted module 'overlay' May 13 00:25:49.783341 systemd-modules-load[216]: Inserted module 'br_netfilter' May 13 00:25:49.836290 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:25:49.839195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 13 00:25:49.840499 dracut-cmdline[236]: dracut-dracut-053 May 13 00:25:49.842113 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:25:49.846568 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:25:49.853551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:25:49.871288 systemd-resolved[269]: Positive Trust Anchors: May 13 00:25:49.871297 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:25:49.871325 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:25:49.873072 systemd-resolved[269]: Defaulting to hostname 'linux'. May 13 00:25:49.874748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:25:49.875101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:25:49.892245 kernel: SCSI subsystem initialized May 13 00:25:49.900240 kernel: Loading iSCSI transport class v2.0-870. May 13 00:25:49.908237 kernel: iscsi: registered transport (tcp) May 13 00:25:49.924233 kernel: iscsi: registered transport (qla4xxx) May 13 00:25:49.924304 kernel: QLogic iSCSI HBA Driver May 13 00:25:49.945895 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:25:49.950404 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:25:49.964825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:25:49.964855 kernel: device-mapper: uevent: version 1.0.3 May 13 00:25:49.965225 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:25:49.998232 kernel: raid6: avx2x4 gen() 52287 MB/s May 13 00:25:50.013238 kernel: raid6: avx2x2 gen() 48452 MB/s May 13 00:25:50.030646 kernel: raid6: avx2x1 gen() 37327 MB/s May 13 00:25:50.030707 kernel: raid6: using algorithm avx2x4 gen() 52287 MB/s May 13 00:25:50.048482 kernel: raid6: .... xor() 17115 MB/s, rmw enabled May 13 00:25:50.048528 kernel: raid6: using avx2x2 recovery algorithm May 13 00:25:50.062235 kernel: xor: automatically using best checksumming function avx May 13 00:25:50.168360 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:25:50.173740 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:25:50.178448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:25:50.185466 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
May 13 00:25:50.187947 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:25:50.194313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:25:50.201900 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation May 13 00:25:50.218182 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:25:50.222371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:25:50.305084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:25:50.309329 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:25:50.317841 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:25:50.318522 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:25:50.319183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:25:50.319440 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:25:50.326402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:25:50.332486 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:25:50.386230 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 00:25:50.386264 kernel: vmw_pvscsi: using 64bit dma May 13 00:25:50.391297 kernel: vmw_pvscsi: max_id: 16 May 13 00:25:50.391328 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 00:25:50.398664 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI May 13 00:25:50.404112 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 00:25:50.404242 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 00:25:50.404252 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 00:25:50.404321 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 00:25:50.404881 kernel: vmw_pvscsi: using MSI-X May 13 00:25:50.409241 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:25:50.415417 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 00:25:50.421228 kernel: libata version 3.00 loaded. May 13 00:25:50.424385 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 00:25:50.424438 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 00:25:50.424562 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:25:50.425561 kernel: scsi host1: ata_piix May 13 00:25:50.426347 kernel: AES CTR mode by8 optimization enabled May 13 00:25:50.427644 kernel: scsi host2: ata_piix May 13 00:25:50.427742 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 13 00:25:50.427827 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 13 00:25:50.427073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:25:50.431318 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 13 00:25:50.431337 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 00:25:50.427166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:50.429900 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:50.431510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:25:50.431618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 00:25:50.432466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:50.441233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:25:50.453750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:50.458322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:25:50.467336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:50.594241 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 00:25:50.600272 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 13 00:25:50.617699 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 00:25:50.617855 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 00:25:50.617924 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 00:25:50.617994 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 13 00:25:50.618351 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 13 00:25:50.622566 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 00:25:50.622759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:25:50.625232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.625265 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 00:25:50.633407 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:25:50.734249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 13 00:25:50.735613 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (480) May 13 00:25:50.738170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 13 00:25:50.740613 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (483) May 13 00:25:50.746234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 00:25:50.749381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 13 00:25:50.749577 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 13 00:25:50.758402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:25:50.834243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.858243 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:50.865248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:51.879149 disk-uuid[588]: The operation has completed successfully. May 13 00:25:51.879598 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:25:52.031653 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:25:52.031734 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:25:52.039391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:25:52.041676 sh[607]: Success May 13 00:25:52.052236 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 00:25:52.211423 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:25:52.212518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:25:52.212743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
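The verity-setup step above is what produces /dev/mapper/usr: the read-only USR partition named on the kernel command line (verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132) is activated as a dm-verity target and checked against the root hash in verity.usrhash. As a rough, generic illustration only — verity-setup.service derives the real devices and hash-tree offset from the command line, and the device paths below are placeholders, not values from this log:

  veritysetup open /dev/<usr-data-partition> usr /dev/<hash-tree-device> \
      a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592

Once the mapping exists, systemd reaches dev-mapper-usr.device and can mount the usr filesystem from it (sysusr-usr.mount in this log).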
May 13 00:25:52.290756 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7 May 13 00:25:52.290809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.290832 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:25:52.290846 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:25:52.290860 kernel: BTRFS info (device dm-0): using free space tree May 13 00:25:52.300241 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 13 00:25:52.301798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:25:52.310392 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 13 00:25:52.311786 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:25:52.415383 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.415432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.417232 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:52.425235 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:52.436879 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:25:52.438250 kernel: BTRFS info (device sda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.441855 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 00:25:52.447344 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:25:52.462953 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:25:52.467342 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 00:25:52.517374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:25:52.524349 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:25:52.536298 systemd-networkd[795]: lo: Link UP May 13 00:25:52.536512 systemd-networkd[795]: lo: Gained carrier May 13 00:25:52.537425 systemd-networkd[795]: Enumeration completed May 13 00:25:52.537604 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:25:52.537771 systemd[1]: Reached target network.target - Network. May 13 00:25:52.538163 systemd-networkd[795]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
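At this point networking in the initrd comes up: parse-ip-for-networkd has turned the kernel command line into a generated networkd unit (10-dracut-cmdline-99.network), and systemd-networkd applies it to ens192. The generated file itself is not printed in the log; assuming plain DHCP, a minimal unit of that kind would look like:

  [Match]
  Name=ens192

  [Network]
  DHCP=yes

The files stage later writes a persistent /etc/systemd/network/00-vmware.network into the real root as well.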
May 13 00:25:52.541781 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:25:52.541992 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:25:52.541997 systemd-networkd[795]: ens192: Link UP May 13 00:25:52.542000 systemd-networkd[795]: ens192: Gained carrier May 13 00:25:52.601429 ignition[666]: Ignition 2.19.0 May 13 00:25:52.601436 ignition[666]: Stage: fetch-offline May 13 00:25:52.601478 ignition[666]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.601486 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.601563 ignition[666]: parsed url from cmdline: "" May 13 00:25:52.601565 ignition[666]: no config URL provided May 13 00:25:52.601568 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:25:52.601574 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 13 00:25:52.601957 ignition[666]: config successfully fetched May 13 00:25:52.601981 ignition[666]: parsing config with SHA512: 53452d14a53a35e8366317d5dd39b63daaa4caf2f8c4b6ebca87f6442fa241f8962576457177befe419fbf1d02fbb3643882a8ceb80beac33ad1df7fcfadb9a4 May 13 00:25:52.605983 unknown[666]: fetched base config from "system" May 13 00:25:52.605990 unknown[666]: fetched user config from "vmware" May 13 00:25:52.606276 ignition[666]: fetch-offline: fetch-offline passed May 13 00:25:52.606320 ignition[666]: Ignition finished successfully May 13 00:25:52.606995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:25:52.607421 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:25:52.611358 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 00:25:52.619934 ignition[804]: Ignition 2.19.0 May 13 00:25:52.619941 ignition[804]: Stage: kargs May 13 00:25:52.620056 ignition[804]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.620063 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.620827 ignition[804]: kargs: kargs passed May 13 00:25:52.620865 ignition[804]: Ignition finished successfully May 13 00:25:52.621779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:25:52.626361 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:25:52.634624 ignition[810]: Ignition 2.19.0 May 13 00:25:52.634635 ignition[810]: Stage: disks May 13 00:25:52.634801 ignition[810]: no configs at "/usr/lib/ignition/base.d" May 13 00:25:52.634808 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:52.635515 ignition[810]: disks: disks passed May 13 00:25:52.635559 ignition[810]: Ignition finished successfully May 13 00:25:52.636383 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:25:52.636991 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:25:52.637141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:25:52.637341 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:25:52.637530 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:25:52.637701 systemd[1]: Reached target basic.target - Basic System. May 13 00:25:52.641320 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
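The fetch-offline stage above shows where this machine's Ignition config comes from: nothing on the command line, nothing in /usr/lib/ignition/user.ign, a base config from "system" and a user config from "vmware". On VMware the user config is conventionally handed to the guest through guestinfo properties, base64-encoded; the keys below are the usual convention for the vmware provider and are shown as an illustration, not something echoed in this log:

  guestinfo.ignition.config.data = "<base64-encoded Ignition JSON>"
  guestinfo.ignition.config.data.encoding = "base64"

Ignition decodes and parses that data (the SHA512 in the log is the digest of the fetched config), and the later stages seen here — kargs, disks, mount, files, umount — then run against the same document.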
May 13 00:25:52.670262 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 13 00:25:52.672661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:25:52.677355 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:25:52.775196 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:25:52.775465 kernel: EXT4-fs (sda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none. May 13 00:25:52.775622 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:25:52.783391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:25:52.785202 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 00:25:52.785666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:25:52.785713 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:25:52.785737 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:25:52.794298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (827) May 13 00:25:52.794341 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:52.794357 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:52.796294 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:52.798176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:25:52.799101 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 00:25:52.801247 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:52.802659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:25:52.907969 initrd-setup-root[851]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:25:52.910659 initrd-setup-root[858]: cut: /sysroot/etc/group: No such file or directory May 13 00:25:52.913830 initrd-setup-root[865]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:25:52.916749 initrd-setup-root[872]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:25:53.116261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:25:53.121348 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:25:53.123803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:25:53.129236 kernel: BTRFS info (device sda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:53.146926 ignition[940]: INFO : Ignition 2.19.0 May 13 00:25:53.147906 ignition[940]: INFO : Stage: mount May 13 00:25:53.147906 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:53.147906 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:53.149139 ignition[940]: INFO : mount: mount passed May 13 00:25:53.149139 ignition[940]: INFO : Ignition finished successfully May 13 00:25:53.149551 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:25:53.158324 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:25:53.212656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:25:53.285656 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 13 00:25:53.290385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:25:53.417237 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (951) May 13 00:25:53.446638 kernel: BTRFS info (device sda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:25:53.446688 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:25:53.446704 kernel: BTRFS info (device sda6): using free space tree May 13 00:25:53.454229 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:25:53.456081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:25:53.471668 ignition[968]: INFO : Ignition 2.19.0 May 13 00:25:53.471668 ignition[968]: INFO : Stage: files May 13 00:25:53.472201 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:53.472201 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:53.472482 ignition[968]: DEBUG : files: compiled without relabeling support, skipping May 13 00:25:53.473019 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:25:53.473019 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:25:53.475255 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:25:53.475528 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:25:53.475788 unknown[968]: wrote ssh authorized keys file for user: core May 13 00:25:53.476039 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:25:53.478091 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 00:25:53.554092 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:25:53.672506 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:25:53.672965 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 
00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:25:53.673914 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:53.675220 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 00:25:54.205396 systemd-networkd[795]: ens192: Gained IPv6LL May 13 00:25:54.230782 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:25:54.602883 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:25:54.602883 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:25:54.603577 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:25:54.603577 ignition[968]: INFO : files: op(d): [started] processing unit "containerd.service" May 13 00:25:54.610355 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:25:54.610597 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:25:54.610597 ignition[968]: INFO : files: op(d): [finished] processing unit "containerd.service" May 13 00:25:54.610597 ignition[968]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 13 00:25:54.610597 ignition[968]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" May 13 00:25:54.611265 ignition[968]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:25:54.866925 ignition[968]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" May 13 00:25:54.870248 ignition[968]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:25:54.870248 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:25:54.871864 ignition[968]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:25:54.871864 ignition[968]: INFO : files: files passed May 13 00:25:54.871864 ignition[968]: INFO : Ignition finished successfully May 13 00:25:54.871091 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:25:54.883423 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:25:54.885326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:25:54.886535 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:25:54.886601 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:25:54.894601 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.894601 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.895671 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:25:54.896613 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:25:54.896901 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:25:54.900392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:25:54.918900 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:25:54.918968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:25:54.919336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:25:54.919463 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:25:54.919684 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:25:54.920247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:25:54.942256 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 13 00:25:54.946327 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:25:54.952513 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:25:54.952873 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:25:54.953031 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:25:54.953168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:25:54.953253 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:25:54.953559 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:25:54.953877 systemd[1]: Stopped target basic.target - Basic System. May 13 00:25:54.954071 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 00:25:54.954296 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:25:54.954522 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:25:54.954733 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:25:54.954955 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:25:54.955192 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:25:54.955424 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:25:54.955657 systemd[1]: Stopped target swap.target - Swaps. May 13 00:25:54.955854 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:25:54.955926 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:25:54.956228 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:25:54.956479 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:25:54.956677 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:25:54.956727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:25:54.956890 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:25:54.956978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:25:54.957258 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:25:54.957327 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:25:54.957594 systemd[1]: Stopped target paths.target - Path Units. May 13 00:25:54.957737 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:25:54.962250 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:25:54.962462 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:25:54.962674 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:25:54.962870 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:25:54.962947 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:25:54.963186 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:25:54.963246 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:25:54.963400 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:25:54.963466 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 13 00:25:54.963772 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:25:54.963830 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:25:54.969409 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:25:54.969557 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:25:54.969685 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:25:54.971359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:25:54.971562 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:25:54.971684 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:25:54.971983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:25:54.972100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:25:54.975977 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:25:54.976560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:25:54.980279 ignition[1022]: INFO : Ignition 2.19.0 May 13 00:25:54.980784 ignition[1022]: INFO : Stage: umount May 13 00:25:54.980784 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:25:54.980784 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:25:54.982282 ignition[1022]: INFO : umount: umount passed May 13 00:25:54.982282 ignition[1022]: INFO : Ignition finished successfully May 13 00:25:54.982733 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:25:54.982796 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:25:54.983054 systemd[1]: Stopped target network.target - Network. May 13 00:25:54.983583 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:25:54.983614 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:25:54.983761 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:25:54.983785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:25:54.984395 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:25:54.984420 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:25:54.985300 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:25:54.985329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:25:54.985530 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:25:54.985674 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:25:54.993999 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:25:54.994059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:25:54.994579 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:25:54.994609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:25:54.999326 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:25:54.999433 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:25:54.999466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:25:54.999610 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. 
May 13 00:25:54.999634 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 00:25:54.999808 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:25:55.000047 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:25:55.000109 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:25:55.004024 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:25:55.005714 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:25:55.005770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:25:55.007056 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:25:55.007086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:25:55.007239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:25:55.007262 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:25:55.012766 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:25:55.013170 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:25:55.013748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:25:55.013783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:25:55.013922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:25:55.013948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:25:55.014082 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:25:55.014114 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:25:55.015253 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:25:55.015293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:25:55.015557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:25:55.015589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:25:55.016482 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:25:55.016592 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:25:55.016629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:25:55.016762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:25:55.016791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:25:55.017026 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:25:55.017074 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:25:55.023996 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:25:55.024079 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:25:55.114631 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:25:55.114700 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:25:55.115106 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:25:55.115231 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:25:55.115262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 13 00:25:55.118369 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:25:55.170192 systemd[1]: Switching root. May 13 00:25:55.195049 systemd-journald[215]: Journal stopped May 13 00:25:58.809979 systemd-journald[215]: Received SIGTERM from PID 1 (systemd). May 13 00:25:58.810002 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:25:58.810010 kernel: SELinux: policy capability open_perms=1 May 13 00:25:58.810016 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:25:58.810021 kernel: SELinux: policy capability always_check_network=0 May 13 00:25:58.810029 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:25:58.810038 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:25:58.810048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:25:58.810058 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:25:58.810066 systemd[1]: Successfully loaded SELinux policy in 39.822ms. May 13 00:25:58.810072 kernel: audit: type=1403 audit(1747095957.577:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:25:58.810078 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.622ms. May 13 00:25:58.810085 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:25:58.810096 systemd[1]: Detected virtualization vmware. May 13 00:25:58.810103 systemd[1]: Detected architecture x86-64. May 13 00:25:58.810112 systemd[1]: Detected first boot. May 13 00:25:58.810119 systemd[1]: Initializing machine ID from random generator. May 13 00:25:58.810127 zram_generator::config[1081]: No configuration found. May 13 00:25:58.810134 systemd[1]: Populated /etc with preset unit settings. May 13 00:25:58.810141 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:25:58.810148 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 13 00:25:58.810158 systemd[1]: Queued start job for default target multi-user.target. May 13 00:25:58.810165 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 00:25:58.810171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:25:58.810179 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:25:58.810186 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:25:58.810193 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:25:58.810199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:25:58.810206 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:25:58.810212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:25:58.814068 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:25:58.814083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
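The "Ignoring unknown escape sequences" warning above points at line 11 of /etc/systemd/system/coreos-metadata.service: the quoted ExecStart argument contains backslash sequences (\K, \d) that are grep/PCRE syntax, not escapes systemd's unit-file parser recognizes. The unit itself is not reproduced in this log, so the snippet below is only a hedged sketch, assuming systemd's C-style escaping in quoted command-line arguments, of how the pattern could be written so grep still receives it unchanged while the parser stops warning:

    # The sequences systemd flags on line 11 (PCRE for grep, not unit-file escapes):
    #   grep -Po "inet \K[\d.]+"
    # One common way to silence the warning, assuming doubled backslashes are reduced
    # to single ones by systemd's quote parsing, so grep still sees \K and \d:
    #   grep -Po "inet \\K[\\d.]+"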
May 13 00:25:58.814091 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:25:58.814097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:25:58.814105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:25:58.814112 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:25:58.814119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:25:58.814127 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 00:25:58.814139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:25:58.814152 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:25:58.814161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:25:58.814171 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:25:58.814180 systemd[1]: Reached target slices.target - Slice Units. May 13 00:25:58.814187 systemd[1]: Reached target swap.target - Swaps. May 13 00:25:58.814196 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:25:58.814204 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:25:58.814210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:25:58.814236 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:25:58.814245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:25:58.814252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:25:58.814259 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:25:58.814266 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:25:58.814275 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:25:58.814282 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:25:58.814289 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:25:58.814296 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:25:58.814307 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:25:58.814319 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:25:58.814326 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:25:58.814333 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:25:58.814342 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 13 00:25:58.814349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:25:58.814356 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:25:58.814362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:25:58.814369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:25:58.814376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 13 00:25:58.814384 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:25:58.814396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:25:58.814405 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:25:58.814414 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:25:58.814421 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 13 00:25:58.814428 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:25:58.814435 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:25:58.814441 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:25:58.814448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:25:58.814455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:25:58.814462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:25:58.814470 kernel: fuse: init (API version 7.39) May 13 00:25:58.814477 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:25:58.814484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:25:58.814496 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:25:58.814505 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:25:58.814512 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:25:58.814519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:25:58.814544 systemd-journald[1185]: Collecting audit messages is disabled. May 13 00:25:58.814565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:25:58.814572 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:25:58.814579 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:25:58.814586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:25:58.814594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:25:58.814601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:25:58.814609 systemd-journald[1185]: Journal started May 13 00:25:58.814624 systemd-journald[1185]: Runtime Journal (/run/log/journal/f483aea3b96d4229a451d78ad03b92e9) is 4.8M, max 38.6M, 33.8M free. May 13 00:25:58.814991 jq[1161]: true May 13 00:25:58.818013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:25:58.818033 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:25:58.818043 kernel: loop: module loaded May 13 00:25:58.817802 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:25:58.817934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:25:58.818525 jq[1203]: true May 13 00:25:58.819126 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:25:58.819469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 00:25:58.819737 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:25:58.820009 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:25:58.831834 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:25:58.849937 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:25:58.854710 kernel: ACPI: bus type drm_connector registered May 13 00:25:58.853205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:25:58.853363 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:25:58.870323 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:25:58.871493 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:25:58.871613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:25:58.874328 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:25:58.874468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:25:58.875200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:25:58.876114 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:25:58.880381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:25:58.880734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:25:58.885372 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:25:58.885527 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:25:58.889202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:25:58.908293 systemd-journald[1185]: Time spent on flushing to /var/log/journal/f483aea3b96d4229a451d78ad03b92e9 is 30.037ms for 1820 entries. May 13 00:25:58.908293 systemd-journald[1185]: System Journal (/var/log/journal/f483aea3b96d4229a451d78ad03b92e9) is 8.0M, max 584.8M, 576.8M free. May 13 00:25:59.049386 systemd-journald[1185]: Received client request to flush runtime journal. May 13 00:25:58.939814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:25:58.941390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:25:58.946928 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:25:58.967070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:25:58.971319 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:25:58.977614 udevadm[1256]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:25:59.038687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:25:59.045616 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. May 13 00:25:59.045632 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
May 13 00:25:59.049101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:25:59.062705 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:25:59.064508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:25:59.065816 ignition[1207]: Ignition 2.19.0 May 13 00:25:59.066537 ignition[1207]: deleting config from guestinfo properties May 13 00:25:59.082393 ignition[1207]: Successfully deleted config May 13 00:25:59.085471 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 13 00:25:59.328674 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:25:59.334335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:25:59.343411 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 13 00:25:59.343623 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. May 13 00:25:59.346403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:26:00.694913 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:26:00.699423 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:26:00.713289 systemd-udevd[1281]: Using default interface naming scheme 'v255'. May 13 00:26:00.820333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:26:00.829383 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:26:00.852358 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:26:00.869758 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 13 00:26:00.896058 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:26:00.948232 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:26:00.959227 kernel: ACPI: button: Power Button [PWRF] May 13 00:26:00.992392 systemd-networkd[1286]: lo: Link UP May 13 00:26:00.992590 systemd-networkd[1286]: lo: Gained carrier May 13 00:26:00.993757 systemd-networkd[1286]: Enumeration completed May 13 00:26:00.993978 systemd-networkd[1286]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 13 00:26:00.996267 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:26:00.996409 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:26:00.996296 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:26:00.998600 systemd-networkd[1286]: ens192: Link UP May 13 00:26:00.998726 systemd-networkd[1286]: ens192: Gained carrier May 13 00:26:01.001299 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1289) May 13 00:26:01.002427 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:26:01.014228 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
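ens192 comes up here using /etc/systemd/network/00-vmware.network, the file Ignition wrote during op(c) earlier in the log. Its contents are never printed, so the fragment below is only a minimal sketch of a systemd-networkd configuration consistent with what the log shows (interface matched by name, addressing acquired, IPv6 link-local gained later); it is not the actual Flatcar-shipped file:

    # Assumed minimal contents of /etc/systemd/network/00-vmware.network (not shown in this log).
    [Match]
    Name=ens192

    [Network]
    # DHCP here is an assumption; IPv6 link-local addressing is on by default,
    # which is what the later "ens192: Gained IPv6LL" messages reflect.
    DHCP=yes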
May 13 00:26:01.040392 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 13 00:26:01.043237 kernel: Guest personality initialized and is active May 13 00:26:01.046330 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 00:26:01.046387 kernel: Initialized host personality May 13 00:26:01.055235 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:26:01.060195 (udev-worker)[1284]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 00:26:01.065236 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:26:01.073409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:26:01.092824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 00:26:01.114062 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:26:01.120330 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:26:01.144239 lvm[1323]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:26:01.172862 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:26:01.173424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:26:01.179394 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:26:01.182280 lvm[1326]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:26:01.213970 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:26:01.214273 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:26:01.214421 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:26:01.214444 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:26:01.214563 systemd[1]: Reached target machines.target - Containers. May 13 00:26:01.216117 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:26:01.220337 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:26:01.221835 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:26:01.222090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:26:01.224531 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:26:01.226363 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:26:01.229328 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:26:01.240675 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:26:01.263237 kernel: loop0: detected capacity change from 0 to 142488 May 13 00:26:01.271765 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:26:01.601819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 00:26:01.650353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:26:01.694265 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:26:01.959317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:26:01.959875 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:26:02.078322 kernel: loop2: detected capacity change from 0 to 140768 May 13 00:26:02.494273 kernel: loop3: detected capacity change from 0 to 2976 May 13 00:26:02.738238 kernel: loop4: detected capacity change from 0 to 142488 May 13 00:26:02.845299 systemd-networkd[1286]: ens192: Gained IPv6LL May 13 00:26:02.850660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:26:02.990236 kernel: loop5: detected capacity change from 0 to 210664 May 13 00:26:03.214322 kernel: loop6: detected capacity change from 0 to 140768 May 13 00:26:03.561230 kernel: loop7: detected capacity change from 0 to 2976 May 13 00:26:03.765966 (sd-merge)[1352]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 13 00:26:03.766271 (sd-merge)[1352]: Merged extensions into '/usr'. May 13 00:26:03.774410 systemd[1]: Reloading requested from client PID 1333 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:26:03.774424 systemd[1]: Reloading... May 13 00:26:03.800275 zram_generator::config[1378]: No configuration found. May 13 00:26:03.872176 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:26:03.887418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:26:03.926498 systemd[1]: Reloading finished in 151 ms. May 13 00:26:03.936606 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:26:03.942301 systemd[1]: Starting ensure-sysext.service... May 13 00:26:03.943499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:26:03.948015 systemd[1]: Reloading requested from client PID 1443 ('systemctl') (unit ensure-sysext.service)... May 13 00:26:03.948024 systemd[1]: Reloading... May 13 00:26:03.980279 zram_generator::config[1470]: No configuration found. May 13 00:26:03.984040 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:26:03.984260 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:26:03.984743 systemd-tmpfiles[1444]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:26:03.984905 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. May 13 00:26:03.984940 systemd-tmpfiles[1444]: ACLs are not supported, ignoring. May 13 00:26:04.028928 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:26:04.028934 systemd-tmpfiles[1444]: Skipping /boot May 13 00:26:04.033325 systemd-tmpfiles[1444]: Detected autofs mount point /boot during canonicalization of boot. 
May 13 00:26:04.033465 systemd-tmpfiles[1444]: Skipping /boot May 13 00:26:04.050486 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:26:04.065961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:26:04.104797 systemd[1]: Reloading finished in 156 ms. May 13 00:26:04.119291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:26:04.128226 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:26:04.143412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:26:04.144876 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:26:04.148444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:26:04.150410 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:26:04.153801 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.155477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:26:04.157408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:26:04.158329 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:26:04.158534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:26:04.159240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.163964 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:26:04.164052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:26:04.168396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:26:04.168484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:26:04.169140 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.170141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:26:04.170294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:26:04.170353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.175538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:26:04.175637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:26:04.178171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.185619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:26:04.188793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
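Both configuration reloads above repeat the same hint about docker.socket line 6: ListenStream= points below the legacy /var/run/ directory, systemd maps it to /run/docker.sock at runtime, and the message asks for the unit file to be updated. The shipped unit is not part of this log, so the stanza below is only a sketch of the single change the message requests, not the complete real unit:

    # Sketch of the [Socket] stanza in docker.socket after following the log's suggestion;
    # all other directives of the real unit are omitted because they are not shown in this log.
    [Socket]
    ListenStream=/run/docker.sock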
May 13 00:26:04.189614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:26:04.189829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:26:04.190004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:26:04.190175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:26:04.192609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:26:04.193357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:26:04.198462 systemd[1]: Finished ensure-sysext.service. May 13 00:26:04.198783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:26:04.213962 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:26:04.214379 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:26:04.214477 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:26:04.216514 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:26:04.216798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:26:04.218314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:26:04.240730 systemd-resolved[1542]: Positive Trust Anchors: May 13 00:26:04.240740 systemd-resolved[1542]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:26:04.240763 systemd-resolved[1542]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:26:04.267041 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:26:04.267252 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:26:04.269294 systemd-resolved[1542]: Defaulting to hostname 'linux'. May 13 00:26:04.270606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:26:04.270821 systemd[1]: Reached target network.target - Network. May 13 00:26:04.270989 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:26:04.271135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:26:04.283249 augenrules[1583]: No rules May 13 00:26:04.283709 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:26:04.320583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:27:24.908754 systemd-resolved[1542]: Clock change detected. Flushing caches. May 13 00:27:24.908821 systemd-timesyncd[1570]: Contacted time server 23.155.40.38:123 (0.flatcar.pool.ntp.org). 
May 13 00:27:24.908871 systemd-timesyncd[1570]: Initial clock synchronization to Tue 2025-05-13 00:27:24.908713 UTC. May 13 00:27:25.671096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:27:25.671380 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:27:25.686737 ldconfig[1330]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:27:25.689664 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:27:25.695545 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:27:25.702132 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:27:25.702805 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:27:25.703047 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:27:25.703201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:27:25.703457 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:27:25.703668 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:27:25.703794 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:27:25.703923 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:27:25.703944 systemd[1]: Reached target paths.target - Path Units. May 13 00:27:25.704040 systemd[1]: Reached target timers.target - Timer Units. May 13 00:27:25.704960 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:27:25.706171 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:27:25.709953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:27:25.710579 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:27:25.710767 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:27:25.710909 systemd[1]: Reached target basic.target - Basic System. May 13 00:27:25.711140 systemd[1]: System is tainted: cgroupsv1 May 13 00:27:25.711210 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:27:25.711263 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:27:25.712268 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:27:25.715551 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 13 00:27:25.716741 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:27:25.723418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:27:25.725637 jq[1606]: false May 13 00:27:25.727343 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:27:25.727479 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 13 00:27:25.736819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:25.739457 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:27:25.740402 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:27:25.743860 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:27:25.745451 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:27:25.751864 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:27:25.758551 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:27:25.759354 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:27:25.764458 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:27:25.769399 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:27:25.770409 extend-filesystems[1607]: Found loop4 May 13 00:27:25.770409 extend-filesystems[1607]: Found loop5 May 13 00:27:25.770409 extend-filesystems[1607]: Found loop6 May 13 00:27:25.770409 extend-filesystems[1607]: Found loop7 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda May 13 00:27:25.770409 extend-filesystems[1607]: Found sda1 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda2 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda3 May 13 00:27:25.770409 extend-filesystems[1607]: Found usr May 13 00:27:25.770409 extend-filesystems[1607]: Found sda4 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda6 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda7 May 13 00:27:25.770409 extend-filesystems[1607]: Found sda9 May 13 00:27:25.770409 extend-filesystems[1607]: Checking size of /dev/sda9 May 13 00:27:25.773743 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 13 00:27:25.784448 jq[1629]: true May 13 00:27:25.780003 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:27:25.780138 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:27:25.793639 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:27:25.793771 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:27:25.795499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:27:25.795624 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:27:25.801177 extend-filesystems[1607]: Old size kept for /dev/sda9 May 13 00:27:25.801177 extend-filesystems[1607]: Found sr0 May 13 00:27:25.811171 update_engine[1628]: I20250513 00:27:25.809475 1628 main.cc:92] Flatcar Update Engine starting May 13 00:27:25.811647 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:27:25.811776 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:27:25.816521 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 13 00:27:25.819024 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 13 00:27:25.826750 jq[1646]: true May 13 00:27:25.835091 (ntainerd)[1654]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:27:25.846414 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 13 00:27:25.862601 tar[1642]: linux-amd64/helm May 13 00:27:25.865988 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1681) May 13 00:27:25.884480 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 13 00:27:25.885684 systemd-logind[1625]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:27:25.885695 systemd-logind[1625]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:27:25.887023 systemd-logind[1625]: New seat seat0. May 13 00:27:25.888013 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:27:25.888738 unknown[1653]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 13 00:27:25.896238 unknown[1653]: Core dump limit set to -1 May 13 00:27:25.903785 dbus-daemon[1605]: [system] SELinux support is enabled May 13 00:27:25.904314 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:27:25.906478 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:27:25.906496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:27:25.906662 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:27:25.906671 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:27:25.909767 dbus-daemon[1605]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 00:27:25.914014 bash[1701]: Updated "/home/core/.ssh/authorized_keys" May 13 00:27:25.913807 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:27:25.914819 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:27:25.917513 kernel: NET: Registered PF_VSOCK protocol family May 13 00:27:25.920615 systemd[1]: Started update-engine.service - Update Engine. May 13 00:27:25.921128 update_engine[1628]: I20250513 00:27:25.920671 1628 update_check_scheduler.cc:74] Next update check in 5m41s May 13 00:27:25.922166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:27:25.931537 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:27:25.999366 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:27:25.999519 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 13 00:27:26.002985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 13 00:27:26.168663 locksmithd[1716]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:27:26.302392 containerd[1654]: time="2025-05-13T00:27:26.302325795Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:27:26.354229 containerd[1654]: time="2025-05-13T00:27:26.354194854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.359319 containerd[1654]: time="2025-05-13T00:27:26.359286672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:26.359319 containerd[1654]: time="2025-05-13T00:27:26.359315837Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:27:26.359428 containerd[1654]: time="2025-05-13T00:27:26.359337376Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:27:26.359445 containerd[1654]: time="2025-05-13T00:27:26.359437593Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:27:26.359463 containerd[1654]: time="2025-05-13T00:27:26.359447949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.361343 containerd[1654]: time="2025-05-13T00:27:26.359484879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:26.361343 containerd[1654]: time="2025-05-13T00:27:26.359495868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.362278 containerd[1654]: time="2025-05-13T00:27:26.362243167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:26.362326 containerd[1654]: time="2025-05-13T00:27:26.362276834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.362326 containerd[1654]: time="2025-05-13T00:27:26.362296853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:26.362326 containerd[1654]: time="2025-05-13T00:27:26.362308445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.363125 containerd[1654]: time="2025-05-13T00:27:26.363101752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.363295 containerd[1654]: time="2025-05-13T00:27:26.363282728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:27:26.363440 containerd[1654]: time="2025-05-13T00:27:26.363424735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:27:26.363440 containerd[1654]: time="2025-05-13T00:27:26.363436606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:27:26.363489 containerd[1654]: time="2025-05-13T00:27:26.363484516Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:27:26.363529 containerd[1654]: time="2025-05-13T00:27:26.363516935Z" level=info msg="metadata content store policy set" policy=shared May 13 00:27:26.370392 containerd[1654]: time="2025-05-13T00:27:26.370354714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:27:26.370392 containerd[1654]: time="2025-05-13T00:27:26.370397672Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:27:26.370468 containerd[1654]: time="2025-05-13T00:27:26.370409702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:27:26.370468 containerd[1654]: time="2025-05-13T00:27:26.370423670Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:27:26.370468 containerd[1654]: time="2025-05-13T00:27:26.370434907Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:27:26.370540 containerd[1654]: time="2025-05-13T00:27:26.370529482Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:27:26.370716 containerd[1654]: time="2025-05-13T00:27:26.370706469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:27:26.370777 containerd[1654]: time="2025-05-13T00:27:26.370767281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:27:26.370793 containerd[1654]: time="2025-05-13T00:27:26.370779552Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:27:26.370793 containerd[1654]: time="2025-05-13T00:27:26.370787985Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:27:26.370820 containerd[1654]: time="2025-05-13T00:27:26.370795525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370820 containerd[1654]: time="2025-05-13T00:27:26.370803032Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370820 containerd[1654]: time="2025-05-13T00:27:26.370810299Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370820 containerd[1654]: time="2025-05-13T00:27:26.370818653Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370873 containerd[1654]: time="2025-05-13T00:27:26.370831542Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 13 00:27:26.370873 containerd[1654]: time="2025-05-13T00:27:26.370841419Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370873 containerd[1654]: time="2025-05-13T00:27:26.370848669Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370873 containerd[1654]: time="2025-05-13T00:27:26.370855452Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:27:26.370873 containerd[1654]: time="2025-05-13T00:27:26.370867324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370875076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370882356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370890559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370898961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370914597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370921470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370928629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370935350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370943143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:27:26.370950 containerd[1654]: time="2025-05-13T00:27:26.370949607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.370957447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.370964541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.370975877Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.370989340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.370996113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371002014Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371026294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371036770Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371042946Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371052003Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371057567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371066724Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371072611Z" level=info msg="NRI interface is disabled by configuration." May 13 00:27:26.371087 containerd[1654]: time="2025-05-13T00:27:26.371078407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:27:26.371323 containerd[1654]: time="2025-05-13T00:27:26.371234063Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:27:26.371323 containerd[1654]: time="2025-05-13T00:27:26.371270732Z" level=info msg="Connect containerd service" May 13 00:27:26.371323 containerd[1654]: time="2025-05-13T00:27:26.371296835Z" level=info msg="using legacy CRI server" May 13 00:27:26.371323 containerd[1654]: time="2025-05-13T00:27:26.371301532Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:27:26.374657 containerd[1654]: time="2025-05-13T00:27:26.373909468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:27:26.374657 containerd[1654]: time="2025-05-13T00:27:26.374254470Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:27:26.376119 containerd[1654]: time="2025-05-13T00:27:26.376103249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:27:26.376148 containerd[1654]: time="2025-05-13T00:27:26.376139118Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376170293Z" level=info msg="Start subscribing containerd event" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376197096Z" level=info msg="Start recovering state" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376235925Z" level=info msg="Start event monitor" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376247522Z" level=info msg="Start snapshots syncer" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376257549Z" level=info msg="Start cni network conf syncer for default" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376262360Z" level=info msg="Start streaming server" May 13 00:27:26.376675 containerd[1654]: time="2025-05-13T00:27:26.376295455Z" level=info msg="containerd successfully booted in 0.075681s" May 13 00:27:26.376388 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:27:26.401213 tar[1642]: linux-amd64/LICENSE May 13 00:27:26.401277 tar[1642]: linux-amd64/README.md May 13 00:27:26.412645 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:27:26.504141 sshd_keygen[1650]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:27:26.519384 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:27:26.525624 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:27:26.535099 systemd[1]: issuegen.service: Deactivated successfully. 
May 13 00:27:26.536673 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:27:26.543630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:27:26.552146 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:27:26.558727 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:27:26.561154 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 00:27:26.561516 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:27:27.731488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:27.732971 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:27:27.734524 systemd[1]: Startup finished in 8.959s (kernel) + 9.779s (userspace) = 18.739s. May 13 00:27:27.738811 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:27.768396 login[1810]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:27:27.768656 login[1809]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:27:27.777620 systemd-logind[1625]: New session 1 of user core. May 13 00:27:27.778888 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:27:27.784590 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:27:27.786114 systemd-logind[1625]: New session 2 of user core. May 13 00:27:27.800463 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:27:27.806547 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:27:27.815603 (systemd)[1828]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:27:27.945739 systemd[1828]: Queued start job for default target default.target. May 13 00:27:27.945999 systemd[1828]: Created slice app.slice - User Application Slice. May 13 00:27:27.946016 systemd[1828]: Reached target paths.target - Paths. May 13 00:27:27.946025 systemd[1828]: Reached target timers.target - Timers. May 13 00:27:27.951487 systemd[1828]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:27:27.956515 systemd[1828]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:27:27.957010 systemd[1828]: Reached target sockets.target - Sockets. May 13 00:27:27.957449 systemd[1828]: Reached target basic.target - Basic System. May 13 00:27:27.957492 systemd[1828]: Reached target default.target - Main User Target. May 13 00:27:27.957510 systemd[1828]: Startup finished in 138ms. May 13 00:27:27.957647 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:27:27.964645 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:27:27.965995 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 13 00:27:29.127964 kubelet[1819]: E0513 00:27:29.127921 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:29.129470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:29.129580 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:39.249918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:27:39.258602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:39.575425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:39.577684 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:39.611981 kubelet[1879]: E0513 00:27:39.611927 1879 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:39.614283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:39.614384 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:49.749698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:27:49.759533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:27:50.067735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:27:50.069905 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:27:50.104205 kubelet[1901]: E0513 00:27:50.104147 1901 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:27:50.105208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:27:50.105295 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:27:56.024277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:27:56.030494 systemd[1]: Started sshd@0-139.178.70.107:22-139.178.68.195:38184.service - OpenSSH per-connection server daemon (139.178.68.195:38184). May 13 00:27:56.063343 sshd[1910]: Accepted publickey for core from 139.178.68.195 port 38184 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.064116 sshd[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.066637 systemd-logind[1625]: New session 3 of user core. May 13 00:27:56.072574 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:27:56.128042 systemd[1]: Started sshd@1-139.178.70.107:22-139.178.68.195:38192.service - OpenSSH per-connection server daemon (139.178.68.195:38192). 
May 13 00:27:56.154101 sshd[1915]: Accepted publickey for core from 139.178.68.195 port 38192 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.155439 sshd[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.158952 systemd-logind[1625]: New session 4 of user core. May 13 00:27:56.162480 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:27:56.213227 sshd[1915]: pam_unix(sshd:session): session closed for user core May 13 00:27:56.223595 systemd[1]: Started sshd@2-139.178.70.107:22-139.178.68.195:38208.service - OpenSSH per-connection server daemon (139.178.68.195:38208). May 13 00:27:56.223958 systemd[1]: sshd@1-139.178.70.107:22-139.178.68.195:38192.service: Deactivated successfully. May 13 00:27:56.225072 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:27:56.226994 systemd-logind[1625]: Session 4 logged out. Waiting for processes to exit. May 13 00:27:56.228293 systemd-logind[1625]: Removed session 4. May 13 00:27:56.253920 sshd[1920]: Accepted publickey for core from 139.178.68.195 port 38208 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.254642 sshd[1920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.256971 systemd-logind[1625]: New session 5 of user core. May 13 00:27:56.264461 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:27:56.314440 sshd[1920]: pam_unix(sshd:session): session closed for user core May 13 00:27:56.328401 systemd[1]: Started sshd@3-139.178.70.107:22-139.178.68.195:38210.service - OpenSSH per-connection server daemon (139.178.68.195:38210). May 13 00:27:56.328774 systemd[1]: sshd@2-139.178.70.107:22-139.178.68.195:38208.service: Deactivated successfully. May 13 00:27:56.332309 systemd-logind[1625]: Session 5 logged out. Waiting for processes to exit. May 13 00:27:56.333205 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:27:56.334865 systemd-logind[1625]: Removed session 5. May 13 00:27:56.357682 sshd[1928]: Accepted publickey for core from 139.178.68.195 port 38210 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.358520 sshd[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.361093 systemd-logind[1625]: New session 6 of user core. May 13 00:27:56.368548 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:27:56.419351 sshd[1928]: pam_unix(sshd:session): session closed for user core May 13 00:27:56.434586 systemd[1]: Started sshd@4-139.178.70.107:22-139.178.68.195:38224.service - OpenSSH per-connection server daemon (139.178.68.195:38224). May 13 00:27:56.434933 systemd[1]: sshd@3-139.178.70.107:22-139.178.68.195:38210.service: Deactivated successfully. May 13 00:27:56.435879 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:27:56.437480 systemd-logind[1625]: Session 6 logged out. Waiting for processes to exit. May 13 00:27:56.438313 systemd-logind[1625]: Removed session 6. May 13 00:27:56.466785 sshd[1936]: Accepted publickey for core from 139.178.68.195 port 38224 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.467778 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.471165 systemd-logind[1625]: New session 7 of user core. May 13 00:27:56.480468 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 00:27:56.549643 sudo[1943]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:27:56.549826 sudo[1943]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:56.559857 sudo[1943]: pam_unix(sudo:session): session closed for user root May 13 00:27:56.561719 sshd[1936]: pam_unix(sshd:session): session closed for user core May 13 00:27:56.571514 systemd[1]: Started sshd@5-139.178.70.107:22-139.178.68.195:38232.service - OpenSSH per-connection server daemon (139.178.68.195:38232). May 13 00:27:56.571853 systemd[1]: sshd@4-139.178.70.107:22-139.178.68.195:38224.service: Deactivated successfully. May 13 00:27:56.572798 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:27:56.574939 systemd-logind[1625]: Session 7 logged out. Waiting for processes to exit. May 13 00:27:56.576182 systemd-logind[1625]: Removed session 7. May 13 00:27:56.602876 sshd[1945]: Accepted publickey for core from 139.178.68.195 port 38232 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.603803 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.607366 systemd-logind[1625]: New session 8 of user core. May 13 00:27:56.620548 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:27:56.670113 sudo[1953]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:27:56.670320 sudo[1953]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:56.672718 sudo[1953]: pam_unix(sudo:session): session closed for user root May 13 00:27:56.676382 sudo[1952]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:27:56.676585 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:56.688659 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:27:56.689480 auditctl[1956]: No rules May 13 00:27:56.689774 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:27:56.689921 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:27:56.692560 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:27:56.709681 augenrules[1975]: No rules May 13 00:27:56.710308 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:27:56.711480 sudo[1952]: pam_unix(sudo:session): session closed for user root May 13 00:27:56.713500 sshd[1945]: pam_unix(sshd:session): session closed for user core May 13 00:27:56.722577 systemd[1]: Started sshd@6-139.178.70.107:22-139.178.68.195:38242.service - OpenSSH per-connection server daemon (139.178.68.195:38242). May 13 00:27:56.722823 systemd[1]: sshd@5-139.178.70.107:22-139.178.68.195:38232.service: Deactivated successfully. May 13 00:27:56.725487 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:27:56.725742 systemd-logind[1625]: Session 8 logged out. Waiting for processes to exit. May 13 00:27:56.727748 systemd-logind[1625]: Removed session 8. May 13 00:27:56.751719 sshd[1981]: Accepted publickey for core from 139.178.68.195 port 38242 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:27:56.752601 sshd[1981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:27:56.756484 systemd-logind[1625]: New session 9 of user core. 
May 13 00:27:56.762538 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:27:56.811948 sudo[1988]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:27:56.812162 sudo[1988]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:27:57.106552 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:27:57.106670 (dockerd)[2005]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:27:57.357626 dockerd[2005]: time="2025-05-13T00:27:57.357221865Z" level=info msg="Starting up" May 13 00:27:57.439847 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3748790900-merged.mount: Deactivated successfully. May 13 00:27:57.762982 dockerd[2005]: time="2025-05-13T00:27:57.762943322Z" level=info msg="Loading containers: start." May 13 00:27:57.834348 kernel: Initializing XFRM netlink socket May 13 00:27:57.888031 systemd-networkd[1286]: docker0: Link UP May 13 00:27:57.895337 dockerd[2005]: time="2025-05-13T00:27:57.895296105Z" level=info msg="Loading containers: done." May 13 00:27:57.906659 dockerd[2005]: time="2025-05-13T00:27:57.906622381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:27:57.906763 dockerd[2005]: time="2025-05-13T00:27:57.906707880Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:27:57.906833 dockerd[2005]: time="2025-05-13T00:27:57.906780986Z" level=info msg="Daemon has completed initialization" May 13 00:27:57.925477 dockerd[2005]: time="2025-05-13T00:27:57.925416858Z" level=info msg="API listen on /run/docker.sock" May 13 00:27:57.925813 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:27:58.438400 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1947077330-merged.mount: Deactivated successfully. May 13 00:27:58.795168 containerd[1654]: time="2025-05-13T00:27:58.795136697Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:27:59.657086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790999246.mount: Deactivated successfully. May 13 00:28:00.249584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 00:28:00.257526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:00.330486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:00.333254 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:28:00.386920 kubelet[2210]: E0513 00:28:00.386886 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:28:00.389447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:28:00.389544 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 13 00:28:01.320353 containerd[1654]: time="2025-05-13T00:28:01.320085217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:01.325782 containerd[1654]: time="2025-05-13T00:28:01.325742656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 13 00:28:01.336137 containerd[1654]: time="2025-05-13T00:28:01.336097453Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:01.346767 containerd[1654]: time="2025-05-13T00:28:01.346737451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:01.347545 containerd[1654]: time="2025-05-13T00:28:01.347413040Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.552245082s" May 13 00:28:01.347545 containerd[1654]: time="2025-05-13T00:28:01.347433899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:28:01.361353 containerd[1654]: time="2025-05-13T00:28:01.361194937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:28:02.821744 containerd[1654]: time="2025-05-13T00:28:02.821702540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:02.822679 containerd[1654]: time="2025-05-13T00:28:02.822646991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 13 00:28:02.823986 containerd[1654]: time="2025-05-13T00:28:02.823959547Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:02.826491 containerd[1654]: time="2025-05-13T00:28:02.826470110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:02.827497 containerd[1654]: time="2025-05-13T00:28:02.827173966Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.465953985s" May 13 00:28:02.827497 containerd[1654]: time="2025-05-13T00:28:02.827194569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 
00:28:02.840660 containerd[1654]: time="2025-05-13T00:28:02.840545205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:28:04.011995 containerd[1654]: time="2025-05-13T00:28:04.011943284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.012758 containerd[1654]: time="2025-05-13T00:28:04.012667627Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 13 00:28:04.013375 containerd[1654]: time="2025-05-13T00:28:04.013134446Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.016431 containerd[1654]: time="2025-05-13T00:28:04.016389269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:04.017414 containerd[1654]: time="2025-05-13T00:28:04.017299025Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.176529922s" May 13 00:28:04.017414 containerd[1654]: time="2025-05-13T00:28:04.017322507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:28:04.030386 containerd[1654]: time="2025-05-13T00:28:04.030322604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:28:05.349019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286349962.mount: Deactivated successfully. 
May 13 00:28:05.640400 containerd[1654]: time="2025-05-13T00:28:05.640042340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:05.645260 containerd[1654]: time="2025-05-13T00:28:05.645207532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 13 00:28:05.651039 containerd[1654]: time="2025-05-13T00:28:05.650983323Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:05.656171 containerd[1654]: time="2025-05-13T00:28:05.656137575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:05.656757 containerd[1654]: time="2025-05-13T00:28:05.656561563Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.626209339s" May 13 00:28:05.656757 containerd[1654]: time="2025-05-13T00:28:05.656586185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:28:05.674696 containerd[1654]: time="2025-05-13T00:28:05.674653578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:28:06.168202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499917471.mount: Deactivated successfully. 
May 13 00:28:07.118077 containerd[1654]: time="2025-05-13T00:28:07.117894563Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 00:28:07.118077 containerd[1654]: time="2025-05-13T00:28:07.118047090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.119412 containerd[1654]: time="2025-05-13T00:28:07.119400076Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.120137 containerd[1654]: time="2025-05-13T00:28:07.120051818Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.445347047s" May 13 00:28:07.120137 containerd[1654]: time="2025-05-13T00:28:07.120071455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:28:07.120603 containerd[1654]: time="2025-05-13T00:28:07.120589320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.132901 containerd[1654]: time="2025-05-13T00:28:07.132877431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:28:07.983745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123735265.mount: Deactivated successfully. 
May 13 00:28:07.986411 containerd[1654]: time="2025-05-13T00:28:07.986365312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.987533 containerd[1654]: time="2025-05-13T00:28:07.987488743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 13 00:28:07.988002 containerd[1654]: time="2025-05-13T00:28:07.987984171Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.989117 containerd[1654]: time="2025-05-13T00:28:07.989094477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:07.989793 containerd[1654]: time="2025-05-13T00:28:07.989554498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 856.653449ms" May 13 00:28:07.989793 containerd[1654]: time="2025-05-13T00:28:07.989574272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:28:08.002194 containerd[1654]: time="2025-05-13T00:28:08.002151717Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:28:08.804305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609170812.mount: Deactivated successfully. May 13 00:28:10.391690 containerd[1654]: time="2025-05-13T00:28:10.391651128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.406020 containerd[1654]: time="2025-05-13T00:28:10.405979249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 13 00:28:10.424014 containerd[1654]: time="2025-05-13T00:28:10.423953543Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.446786 containerd[1654]: time="2025-05-13T00:28:10.446704907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:10.447620 containerd[1654]: time="2025-05-13T00:28:10.447529049Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.445115932s" May 13 00:28:10.447620 containerd[1654]: time="2025-05-13T00:28:10.447549935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:28:10.499481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
May 13 00:28:10.506489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:11.009179 update_engine[1628]: I20250513 00:28:11.009109 1628 update_attempter.cc:509] Updating boot flags... May 13 00:28:11.155395 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2383) May 13 00:28:11.272865 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2379) May 13 00:28:11.717484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:11.726737 (kubelet)[2451]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:28:11.770999 kubelet[2451]: E0513 00:28:11.770960 2451 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:28:11.772810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:28:11.773158 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:28:12.920314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:12.931545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:12.949966 systemd[1]: Reloading requested from client PID 2469 ('systemctl') (unit session-9.scope)... May 13 00:28:12.949977 systemd[1]: Reloading... May 13 00:28:13.010350 zram_generator::config[2516]: No configuration found. May 13 00:28:13.061729 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:28:13.076981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:28:13.121297 systemd[1]: Reloading finished in 171 ms. May 13 00:28:13.156480 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:28:13.156574 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:28:13.156799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:13.161840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:13.559034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:13.561343 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:28:13.603562 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:13.603562 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 13 00:28:13.603562 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:13.618815 kubelet[2584]: I0513 00:28:13.618760 2584 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:28:13.907295 kubelet[2584]: I0513 00:28:13.907235 2584 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:28:13.907295 kubelet[2584]: I0513 00:28:13.907252 2584 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:28:13.907402 kubelet[2584]: I0513 00:28:13.907391 2584 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:28:14.088366 kubelet[2584]: I0513 00:28:14.088017 2584 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:28:14.108441 kubelet[2584]: E0513 00:28:14.108408 2584 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.172598 kubelet[2584]: I0513 00:28:14.172436 2584 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:28:14.187172 kubelet[2584]: I0513 00:28:14.187127 2584 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:28:14.202415 kubelet[2584]: I0513 00:28:14.187173 2584 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:28:14.202577 kubelet[2584]: I0513 00:28:14.202428 2584 topology_manager.go:138] "Creating topology 
manager with none policy" May 13 00:28:14.202577 kubelet[2584]: I0513 00:28:14.202441 2584 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:28:14.202577 kubelet[2584]: I0513 00:28:14.202550 2584 state_mem.go:36] "Initialized new in-memory state store" May 13 00:28:14.209913 kubelet[2584]: W0513 00:28:14.209846 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.209913 kubelet[2584]: E0513 00:28:14.209896 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.219162 kubelet[2584]: I0513 00:28:14.219129 2584 kubelet.go:400] "Attempting to sync node with API server" May 13 00:28:14.219162 kubelet[2584]: I0513 00:28:14.219168 2584 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:28:14.222687 kubelet[2584]: I0513 00:28:14.222675 2584 kubelet.go:312] "Adding apiserver pod source" May 13 00:28:14.227753 kubelet[2584]: I0513 00:28:14.227733 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:28:14.251998 kubelet[2584]: I0513 00:28:14.251763 2584 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:28:14.257056 kubelet[2584]: I0513 00:28:14.256923 2584 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:28:14.257455 kubelet[2584]: W0513 00:28:14.257419 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.257492 kubelet[2584]: E0513 00:28:14.257459 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.266599 kubelet[2584]: W0513 00:28:14.266409 2584 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 00:28:14.270767 kubelet[2584]: I0513 00:28:14.270636 2584 server.go:1264] "Started kubelet" May 13 00:28:14.270767 kubelet[2584]: I0513 00:28:14.270729 2584 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:28:14.276965 kubelet[2584]: I0513 00:28:14.276539 2584 server.go:455] "Adding debug handlers to kubelet server" May 13 00:28:14.280178 kubelet[2584]: I0513 00:28:14.279757 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:28:14.280178 kubelet[2584]: I0513 00:28:14.279956 2584 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:28:14.282315 kubelet[2584]: I0513 00:28:14.282197 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:28:14.288716 kubelet[2584]: I0513 00:28:14.288702 2584 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:28:14.290505 kubelet[2584]: I0513 00:28:14.290494 2584 reconciler.go:26] "Reconciler: start to sync state" May 13 00:28:14.290815 kubelet[2584]: E0513 00:28:14.280084 2584 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.107:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eeea42778fe1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:28:14.270619163 +0000 UTC m=+0.707127923,LastTimestamp:2025-05-13 00:28:14.270619163 +0000 UTC m=+0.707127923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:28:14.290815 kubelet[2584]: I0513 00:28:14.290792 2584 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:28:14.290889 kubelet[2584]: W0513 00:28:14.290872 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.290908 kubelet[2584]: E0513 00:28:14.290898 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.291084 kubelet[2584]: E0513 00:28:14.290932 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="200ms" May 13 00:28:14.292127 kubelet[2584]: I0513 00:28:14.291972 2584 factory.go:221] Registration of the systemd container factory successfully May 13 00:28:14.292127 kubelet[2584]: I0513 00:28:14.292024 2584 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:28:14.293101 kubelet[2584]: I0513 00:28:14.293076 2584 factory.go:221] 
Registration of the containerd container factory successfully May 13 00:28:14.298567 kubelet[2584]: I0513 00:28:14.298194 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:28:14.298974 kubelet[2584]: I0513 00:28:14.298959 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:28:14.299015 kubelet[2584]: I0513 00:28:14.298978 2584 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:28:14.299015 kubelet[2584]: I0513 00:28:14.298997 2584 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:28:14.299076 kubelet[2584]: E0513 00:28:14.299023 2584 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:28:14.303155 kubelet[2584]: W0513 00:28:14.303119 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.303282 kubelet[2584]: E0513 00:28:14.303273 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:14.313941 kubelet[2584]: E0513 00:28:14.313920 2584 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:28:14.319530 kubelet[2584]: I0513 00:28:14.319467 2584 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:28:14.319530 kubelet[2584]: I0513 00:28:14.319525 2584 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:28:14.319637 kubelet[2584]: I0513 00:28:14.319540 2584 state_mem.go:36] "Initialized new in-memory state store" May 13 00:28:14.320877 kubelet[2584]: I0513 00:28:14.320865 2584 policy_none.go:49] "None policy: Start" May 13 00:28:14.321195 kubelet[2584]: I0513 00:28:14.321181 2584 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:28:14.321195 kubelet[2584]: I0513 00:28:14.321193 2584 state_mem.go:35] "Initializing new in-memory state store" May 13 00:28:14.324962 kubelet[2584]: I0513 00:28:14.324942 2584 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:28:14.326345 kubelet[2584]: I0513 00:28:14.325155 2584 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:28:14.326345 kubelet[2584]: I0513 00:28:14.325227 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:28:14.327407 kubelet[2584]: E0513 00:28:14.327389 2584 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:28:14.390463 kubelet[2584]: I0513 00:28:14.390436 2584 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:14.390686 kubelet[2584]: E0513 00:28:14.390672 2584 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" May 13 00:28:14.400014 kubelet[2584]: I0513 
00:28:14.399943 2584 topology_manager.go:215] "Topology Admit Handler" podUID="70735a6e08b5be8f131e6519e52cde5d" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:28:14.400962 kubelet[2584]: I0513 00:28:14.400949 2584 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:28:14.402307 kubelet[2584]: I0513 00:28:14.402117 2584 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:28:14.491873 kubelet[2584]: E0513 00:28:14.491787 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="400ms" May 13 00:28:14.493181 kubelet[2584]: I0513 00:28:14.493146 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:28:14.592139 kubelet[2584]: I0513 00:28:14.592108 2584 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:14.592370 kubelet[2584]: E0513 00:28:14.592319 2584 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" May 13 00:28:14.593507 kubelet[2584]: I0513 00:28:14.593491 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:14.593554 kubelet[2584]: I0513 00:28:14.593524 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:14.593609 kubelet[2584]: I0513 00:28:14.593595 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:14.593634 kubelet[2584]: I0513 00:28:14.593615 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:14.593634 kubelet[2584]: I0513 00:28:14.593624 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:14.593676 kubelet[2584]: I0513 00:28:14.593633 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:14.593676 kubelet[2584]: I0513 00:28:14.593642 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:14.593712 kubelet[2584]: I0513 00:28:14.593675 2584 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:14.707218 containerd[1654]: time="2025-05-13T00:28:14.706999053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:28:14.708890 containerd[1654]: time="2025-05-13T00:28:14.708867834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:28:14.708991 containerd[1654]: time="2025-05-13T00:28:14.708980333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70735a6e08b5be8f131e6519e52cde5d,Namespace:kube-system,Attempt:0,}" May 13 00:28:14.893029 kubelet[2584]: E0513 00:28:14.893000 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="800ms" May 13 00:28:14.993186 kubelet[2584]: I0513 00:28:14.993168 2584 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:14.993380 kubelet[2584]: E0513 00:28:14.993364 2584 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" May 13 00:28:15.180578 kubelet[2584]: W0513 00:28:15.180499 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.180578 kubelet[2584]: E0513 00:28:15.180527 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 
00:28:15.195311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501974798.mount: Deactivated successfully. May 13 00:28:15.198021 containerd[1654]: time="2025-05-13T00:28:15.197985784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:15.198514 containerd[1654]: time="2025-05-13T00:28:15.198483158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 00:28:15.199265 containerd[1654]: time="2025-05-13T00:28:15.199114911Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:15.199801 containerd[1654]: time="2025-05-13T00:28:15.199663191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:15.199801 containerd[1654]: time="2025-05-13T00:28:15.199723336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:28:15.200075 containerd[1654]: time="2025-05-13T00:28:15.200059353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:28:15.200149 containerd[1654]: time="2025-05-13T00:28:15.200138633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:15.202117 containerd[1654]: time="2025-05-13T00:28:15.202092920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:15.203958 containerd[1654]: time="2025-05-13T00:28:15.203075397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.167425ms" May 13 00:28:15.203958 containerd[1654]: time="2025-05-13T00:28:15.203901590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.864922ms" May 13 00:28:15.205100 containerd[1654]: time="2025-05-13T00:28:15.205000039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.95482ms" May 13 00:28:15.261500 kubelet[2584]: W0513 00:28:15.261459 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.261500 kubelet[2584]: E0513 00:28:15.261486 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.307052 containerd[1654]: time="2025-05-13T00:28:15.306996473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:15.307229 containerd[1654]: time="2025-05-13T00:28:15.307151364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:15.307229 containerd[1654]: time="2025-05-13T00:28:15.307181477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.307294 containerd[1654]: time="2025-05-13T00:28:15.307260759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.311752 containerd[1654]: time="2025-05-13T00:28:15.307038850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:15.311752 containerd[1654]: time="2025-05-13T00:28:15.311644240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:15.311752 containerd[1654]: time="2025-05-13T00:28:15.311656356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.311752 containerd[1654]: time="2025-05-13T00:28:15.311724928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.314640 containerd[1654]: time="2025-05-13T00:28:15.314412917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:15.314640 containerd[1654]: time="2025-05-13T00:28:15.314589325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:15.314911 containerd[1654]: time="2025-05-13T00:28:15.314734478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.315421 containerd[1654]: time="2025-05-13T00:28:15.315387184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:15.378926 containerd[1654]: time="2025-05-13T00:28:15.377954670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70735a6e08b5be8f131e6519e52cde5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4c8359721a38f621d202e6276be6cb670bf40d738bb087f77dfcbd25a4427e5\"" May 13 00:28:15.384192 containerd[1654]: time="2025-05-13T00:28:15.384166775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"04cf3984bfeff3b2a255dc818bef0a1de402a2301b131c63b741ac7fcfa92a60\"" May 13 00:28:15.384432 containerd[1654]: time="2025-05-13T00:28:15.384373938Z" level=info msg="CreateContainer within sandbox \"d4c8359721a38f621d202e6276be6cb670bf40d738bb087f77dfcbd25a4427e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:28:15.385401 containerd[1654]: time="2025-05-13T00:28:15.385388881Z" level=info msg="CreateContainer within sandbox \"04cf3984bfeff3b2a255dc818bef0a1de402a2301b131c63b741ac7fcfa92a60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:28:15.388997 containerd[1654]: time="2025-05-13T00:28:15.388980787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7140eebcba83a4a7649e5008aa17bdd4f935fe288bc22be61d6d9e5db307aa14\"" May 13 00:28:15.390401 containerd[1654]: time="2025-05-13T00:28:15.390387685Z" level=info msg="CreateContainer within sandbox \"7140eebcba83a4a7649e5008aa17bdd4f935fe288bc22be61d6d9e5db307aa14\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:28:15.408095 containerd[1654]: time="2025-05-13T00:28:15.408067743Z" level=info msg="CreateContainer within sandbox \"d4c8359721a38f621d202e6276be6cb670bf40d738bb087f77dfcbd25a4427e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e873b0275e8e1bfa63ae94ee9dae1ede924312577773a863c772f0b786862f2\"" May 13 00:28:15.408587 containerd[1654]: time="2025-05-13T00:28:15.408568042Z" level=info msg="StartContainer for \"0e873b0275e8e1bfa63ae94ee9dae1ede924312577773a863c772f0b786862f2\"" May 13 00:28:15.410089 containerd[1654]: time="2025-05-13T00:28:15.409994093Z" level=info msg="CreateContainer within sandbox \"7140eebcba83a4a7649e5008aa17bdd4f935fe288bc22be61d6d9e5db307aa14\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ea8ea513315ecb0f9d0cdf12cc4723a7a12538e515077560d2878065f38bebe\"" May 13 00:28:15.410727 containerd[1654]: time="2025-05-13T00:28:15.410634611Z" level=info msg="CreateContainer within sandbox \"04cf3984bfeff3b2a255dc818bef0a1de402a2301b131c63b741ac7fcfa92a60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74c53022364bfa7fc8cb57a6b5bff685b369da0ee7a0d5b940b6c67b2ef443e2\"" May 13 00:28:15.411010 containerd[1654]: time="2025-05-13T00:28:15.411000037Z" level=info msg="StartContainer for \"6ea8ea513315ecb0f9d0cdf12cc4723a7a12538e515077560d2878065f38bebe\"" May 13 00:28:15.414128 containerd[1654]: time="2025-05-13T00:28:15.414109458Z" level=info msg="StartContainer for \"74c53022364bfa7fc8cb57a6b5bff685b369da0ee7a0d5b940b6c67b2ef443e2\"" May 13 00:28:15.468931 containerd[1654]: time="2025-05-13T00:28:15.468806568Z" level=info msg="StartContainer for 
\"6ea8ea513315ecb0f9d0cdf12cc4723a7a12538e515077560d2878065f38bebe\" returns successfully" May 13 00:28:15.471576 containerd[1654]: time="2025-05-13T00:28:15.471470861Z" level=info msg="StartContainer for \"0e873b0275e8e1bfa63ae94ee9dae1ede924312577773a863c772f0b786862f2\" returns successfully" May 13 00:28:15.487696 containerd[1654]: time="2025-05-13T00:28:15.487656933Z" level=info msg="StartContainer for \"74c53022364bfa7fc8cb57a6b5bff685b369da0ee7a0d5b940b6c67b2ef443e2\" returns successfully" May 13 00:28:15.540540 kubelet[2584]: W0513 00:28:15.540500 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.540540 kubelet[2584]: E0513 00:28:15.540542 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.612532 kubelet[2584]: W0513 00:28:15.612492 2584 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.612532 kubelet[2584]: E0513 00:28:15.612537 2584 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused May 13 00:28:15.694084 kubelet[2584]: E0513 00:28:15.694044 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="1.6s" May 13 00:28:15.797484 kubelet[2584]: I0513 00:28:15.797462 2584 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:15.797892 kubelet[2584]: E0513 00:28:15.797703 2584 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" May 13 00:28:17.204112 kubelet[2584]: E0513 00:28:17.204090 2584 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:28:17.295631 kubelet[2584]: E0513 00:28:17.295606 2584 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:28:17.399393 kubelet[2584]: I0513 00:28:17.399324 2584 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:17.406971 kubelet[2584]: I0513 00:28:17.406952 2584 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:28:18.246031 kubelet[2584]: I0513 00:28:18.245978 2584 apiserver.go:52] "Watching apiserver" May 13 00:28:18.291226 kubelet[2584]: I0513 00:28:18.291202 2584 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:28:19.118021 systemd[1]: Reloading requested from 
client PID 2852 ('systemctl') (unit session-9.scope)... May 13 00:28:19.118033 systemd[1]: Reloading... May 13 00:28:19.173363 zram_generator::config[2893]: No configuration found. May 13 00:28:19.241528 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 00:28:19.258456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:28:19.312270 systemd[1]: Reloading finished in 194 ms. May 13 00:28:19.333668 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:19.341067 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:28:19.341314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:19.347688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:20.169075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:20.172104 (kubelet)[2967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:28:20.243871 kubelet[2967]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:20.244276 kubelet[2967]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:28:20.244276 kubelet[2967]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:20.244276 kubelet[2967]: I0513 00:28:20.244216 2967 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:28:20.249401 kubelet[2967]: I0513 00:28:20.248834 2967 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:28:20.249401 kubelet[2967]: I0513 00:28:20.248848 2967 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:28:20.249401 kubelet[2967]: I0513 00:28:20.248981 2967 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:28:20.249856 kubelet[2967]: I0513 00:28:20.249845 2967 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:28:20.255906 kubelet[2967]: I0513 00:28:20.255030 2967 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:28:20.259297 kubelet[2967]: I0513 00:28:20.259191 2967 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:28:20.259551 kubelet[2967]: I0513 00:28:20.259535 2967 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:28:20.259644 kubelet[2967]: I0513 00:28:20.259551 2967 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:28:20.260601 kubelet[2967]: I0513 00:28:20.260586 2967 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:28:20.260601 kubelet[2967]: I0513 00:28:20.260600 2967 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:28:20.260646 kubelet[2967]: I0513 00:28:20.260625 2967 state_mem.go:36] "Initialized new in-memory state store" May 13 00:28:20.262407 kubelet[2967]: I0513 00:28:20.262396 2967 kubelet.go:400] "Attempting to sync node with API server" May 13 00:28:20.262407 kubelet[2967]: I0513 00:28:20.262408 2967 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:28:20.262456 kubelet[2967]: I0513 00:28:20.262423 2967 kubelet.go:312] "Adding apiserver pod source" May 13 00:28:20.262456 kubelet[2967]: I0513 00:28:20.262432 2967 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:28:20.264247 kubelet[2967]: I0513 00:28:20.264235 2967 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:28:20.268965 kubelet[2967]: I0513 00:28:20.268561 2967 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:28:20.268965 kubelet[2967]: I0513 00:28:20.268787 2967 server.go:1264] "Started kubelet" May 13 00:28:20.280052 kubelet[2967]: I0513 00:28:20.279937 2967 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:28:20.285032 kubelet[2967]: I0513 00:28:20.284608 2967 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:28:20.296665 kubelet[2967]: I0513 00:28:20.296645 2967 server.go:455] "Adding debug handlers to 
kubelet server" May 13 00:28:20.301201 kubelet[2967]: I0513 00:28:20.301162 2967 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:28:20.301681 kubelet[2967]: I0513 00:28:20.301314 2967 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:28:20.303952 kubelet[2967]: I0513 00:28:20.303565 2967 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:28:20.303952 kubelet[2967]: I0513 00:28:20.303637 2967 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:28:20.303952 kubelet[2967]: I0513 00:28:20.303711 2967 reconciler.go:26] "Reconciler: start to sync state" May 13 00:28:20.309942 kubelet[2967]: I0513 00:28:20.307799 2967 factory.go:221] Registration of the systemd container factory successfully May 13 00:28:20.309942 kubelet[2967]: I0513 00:28:20.307885 2967 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:28:20.310917 kubelet[2967]: E0513 00:28:20.310501 2967 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:28:20.312681 kubelet[2967]: I0513 00:28:20.312042 2967 factory.go:221] Registration of the containerd container factory successfully May 13 00:28:20.318638 kubelet[2967]: I0513 00:28:20.318613 2967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:28:20.319475 kubelet[2967]: I0513 00:28:20.319454 2967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:28:20.319475 kubelet[2967]: I0513 00:28:20.319476 2967 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:28:20.319552 kubelet[2967]: I0513 00:28:20.319490 2967 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:28:20.319894 kubelet[2967]: E0513 00:28:20.319869 2967 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357573 2967 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357585 2967 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357598 2967 state_mem.go:36] "Initialized new in-memory state store" May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357695 2967 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357702 2967 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:28:20.357812 kubelet[2967]: I0513 00:28:20.357714 2967 policy_none.go:49] "None policy: Start" May 13 00:28:20.358206 kubelet[2967]: I0513 00:28:20.358199 2967 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:28:20.359005 kubelet[2967]: I0513 00:28:20.358245 2967 state_mem.go:35] "Initializing new in-memory state store" May 13 00:28:20.359005 kubelet[2967]: I0513 00:28:20.358361 2967 state_mem.go:75] "Updated machine memory state" May 13 00:28:20.359673 kubelet[2967]: I0513 00:28:20.359664 2967 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:28:20.359877 
kubelet[2967]: I0513 00:28:20.359856 2967 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:28:20.360130 kubelet[2967]: I0513 00:28:20.360124 2967 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:28:20.404754 kubelet[2967]: I0513 00:28:20.404741 2967 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:20.408738 kubelet[2967]: I0513 00:28:20.408727 2967 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:28:20.408905 kubelet[2967]: I0513 00:28:20.408862 2967 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:28:20.420184 kubelet[2967]: I0513 00:28:20.420118 2967 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:28:20.420686 kubelet[2967]: I0513 00:28:20.420676 2967 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:28:20.426483 kubelet[2967]: I0513 00:28:20.426432 2967 topology_manager.go:215] "Topology Admit Handler" podUID="70735a6e08b5be8f131e6519e52cde5d" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:28:20.430098 kubelet[2967]: E0513 00:28:20.430072 2967 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 00:28:20.604546 kubelet[2967]: I0513 00:28:20.604487 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:20.604643 kubelet[2967]: I0513 00:28:20.604565 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:20.604643 kubelet[2967]: I0513 00:28:20.604580 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:20.604643 kubelet[2967]: I0513 00:28:20.604612 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:20.604643 kubelet[2967]: I0513 00:28:20.604624 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") 
" pod="kube-system/kube-controller-manager-localhost" May 13 00:28:20.604643 kubelet[2967]: I0513 00:28:20.604636 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:28:20.604734 kubelet[2967]: I0513 00:28:20.604644 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:20.604734 kubelet[2967]: I0513 00:28:20.604680 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:20.604734 kubelet[2967]: I0513 00:28:20.604689 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70735a6e08b5be8f131e6519e52cde5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70735a6e08b5be8f131e6519e52cde5d\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:21.266000 kubelet[2967]: I0513 00:28:21.263759 2967 apiserver.go:52] "Watching apiserver" May 13 00:28:21.304121 kubelet[2967]: I0513 00:28:21.304074 2967 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:28:21.380693 kubelet[2967]: E0513 00:28:21.380593 2967 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:28:21.453880 kubelet[2967]: E0513 00:28:21.453335 2967 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:28:21.477343 kubelet[2967]: I0513 00:28:21.476284 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.476262239 podStartE2EDuration="1.476262239s" podCreationTimestamp="2025-05-13 00:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:21.454686608 +0000 UTC m=+1.276770710" watchObservedRunningTime="2025-05-13 00:28:21.476262239 +0000 UTC m=+1.298346332" May 13 00:28:21.493228 kubelet[2967]: I0513 00:28:21.493005 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4929915999999999 podStartE2EDuration="1.4929916s" podCreationTimestamp="2025-05-13 00:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:21.484262882 +0000 UTC m=+1.306346984" watchObservedRunningTime="2025-05-13 00:28:21.4929916 +0000 UTC m=+1.315075696" May 13 00:28:21.501055 kubelet[2967]: I0513 00:28:21.501010 2967 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.5009936440000002 podStartE2EDuration="3.500993644s" podCreationTimestamp="2025-05-13 00:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:21.493150349 +0000 UTC m=+1.315234451" watchObservedRunningTime="2025-05-13 00:28:21.500993644 +0000 UTC m=+1.323077742" May 13 00:28:24.232683 sudo[1988]: pam_unix(sudo:session): session closed for user root May 13 00:28:24.234525 sshd[1981]: pam_unix(sshd:session): session closed for user core May 13 00:28:24.237203 systemd[1]: sshd@6-139.178.70.107:22-139.178.68.195:38242.service: Deactivated successfully. May 13 00:28:24.238986 systemd-logind[1625]: Session 9 logged out. Waiting for processes to exit. May 13 00:28:24.239039 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:28:24.240583 systemd-logind[1625]: Removed session 9. May 13 00:28:33.796533 kubelet[2967]: I0513 00:28:33.794582 2967 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:28:33.814511 containerd[1654]: time="2025-05-13T00:28:33.814379428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:28:33.815581 kubelet[2967]: I0513 00:28:33.815167 2967 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:28:34.519896 kubelet[2967]: I0513 00:28:34.519867 2967 topology_manager.go:215] "Topology Admit Handler" podUID="52051d5c-cefd-49d1-892c-df46004324a8" podNamespace="kube-system" podName="kube-proxy-jxpvk" May 13 00:28:34.617142 kubelet[2967]: I0513 00:28:34.617108 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52051d5c-cefd-49d1-892c-df46004324a8-kube-proxy\") pod \"kube-proxy-jxpvk\" (UID: \"52051d5c-cefd-49d1-892c-df46004324a8\") " pod="kube-system/kube-proxy-jxpvk" May 13 00:28:34.617243 kubelet[2967]: I0513 00:28:34.617171 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52051d5c-cefd-49d1-892c-df46004324a8-lib-modules\") pod \"kube-proxy-jxpvk\" (UID: \"52051d5c-cefd-49d1-892c-df46004324a8\") " pod="kube-system/kube-proxy-jxpvk" May 13 00:28:34.617243 kubelet[2967]: I0513 00:28:34.617185 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2b7q\" (UniqueName: \"kubernetes.io/projected/52051d5c-cefd-49d1-892c-df46004324a8-kube-api-access-g2b7q\") pod \"kube-proxy-jxpvk\" (UID: \"52051d5c-cefd-49d1-892c-df46004324a8\") " pod="kube-system/kube-proxy-jxpvk" May 13 00:28:34.617243 kubelet[2967]: I0513 00:28:34.617197 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52051d5c-cefd-49d1-892c-df46004324a8-xtables-lock\") pod \"kube-proxy-jxpvk\" (UID: \"52051d5c-cefd-49d1-892c-df46004324a8\") " pod="kube-system/kube-proxy-jxpvk" May 13 00:28:34.832543 containerd[1654]: time="2025-05-13T00:28:34.831309395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxpvk,Uid:52051d5c-cefd-49d1-892c-df46004324a8,Namespace:kube-system,Attempt:0,}" May 13 00:28:34.919624 containerd[1654]: 
time="2025-05-13T00:28:34.919565324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:34.920111 containerd[1654]: time="2025-05-13T00:28:34.919869537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:34.920197 containerd[1654]: time="2025-05-13T00:28:34.920130000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:34.920319 containerd[1654]: time="2025-05-13T00:28:34.920286628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:34.956083 containerd[1654]: time="2025-05-13T00:28:34.956034937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxpvk,Uid:52051d5c-cefd-49d1-892c-df46004324a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e9cb4dadfc8b0710964f92008037e2172cc4bd3fece14a4cf7d3313df9f92f3\"" May 13 00:28:34.957839 containerd[1654]: time="2025-05-13T00:28:34.957818015Z" level=info msg="CreateContainer within sandbox \"6e9cb4dadfc8b0710964f92008037e2172cc4bd3fece14a4cf7d3313df9f92f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:28:35.046353 kubelet[2967]: I0513 00:28:35.042072 2967 topology_manager.go:215] "Topology Admit Handler" podUID="1a4cd09e-d541-4b7f-a2f8-aebcad632cfc" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-tmmdg" May 13 00:28:35.048984 containerd[1654]: time="2025-05-13T00:28:35.048938858Z" level=info msg="CreateContainer within sandbox \"6e9cb4dadfc8b0710964f92008037e2172cc4bd3fece14a4cf7d3313df9f92f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d3818907ff510d44822fb242efbb435041eb3d6cad328cc624080a670adfb11\"" May 13 00:28:35.050065 containerd[1654]: time="2025-05-13T00:28:35.049387322Z" level=info msg="StartContainer for \"8d3818907ff510d44822fb242efbb435041eb3d6cad328cc624080a670adfb11\"" May 13 00:28:35.097596 containerd[1654]: time="2025-05-13T00:28:35.097492557Z" level=info msg="StartContainer for \"8d3818907ff510d44822fb242efbb435041eb3d6cad328cc624080a670adfb11\" returns successfully" May 13 00:28:35.120022 kubelet[2967]: I0513 00:28:35.119986 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnvz\" (UniqueName: \"kubernetes.io/projected/1a4cd09e-d541-4b7f-a2f8-aebcad632cfc-kube-api-access-6pnvz\") pod \"tigera-operator-797db67f8-tmmdg\" (UID: \"1a4cd09e-d541-4b7f-a2f8-aebcad632cfc\") " pod="tigera-operator/tigera-operator-797db67f8-tmmdg" May 13 00:28:35.120022 kubelet[2967]: I0513 00:28:35.120016 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a4cd09e-d541-4b7f-a2f8-aebcad632cfc-var-lib-calico\") pod \"tigera-operator-797db67f8-tmmdg\" (UID: \"1a4cd09e-d541-4b7f-a2f8-aebcad632cfc\") " pod="tigera-operator/tigera-operator-797db67f8-tmmdg" May 13 00:28:35.348875 containerd[1654]: time="2025-05-13T00:28:35.348608341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-tmmdg,Uid:1a4cd09e-d541-4b7f-a2f8-aebcad632cfc,Namespace:tigera-operator,Attempt:0,}" May 13 00:28:35.392942 kubelet[2967]: I0513 00:28:35.392763 2967 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-proxy-jxpvk" podStartSLOduration=1.392734613 podStartE2EDuration="1.392734613s" podCreationTimestamp="2025-05-13 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:35.392617267 +0000 UTC m=+15.214701376" watchObservedRunningTime="2025-05-13 00:28:35.392734613 +0000 UTC m=+15.214818717" May 13 00:28:35.479303 containerd[1654]: time="2025-05-13T00:28:35.479079416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:35.479303 containerd[1654]: time="2025-05-13T00:28:35.479149352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:35.479303 containerd[1654]: time="2025-05-13T00:28:35.479184276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:35.479526 containerd[1654]: time="2025-05-13T00:28:35.479478764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:35.520180 containerd[1654]: time="2025-05-13T00:28:35.520049845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-tmmdg,Uid:1a4cd09e-d541-4b7f-a2f8-aebcad632cfc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"803c36877d4b069116d5d4c4e4617660ad90518573b685fe9d19f4dee8de946b\"" May 13 00:28:35.521764 containerd[1654]: time="2025-05-13T00:28:35.521612807Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:28:35.740628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882710342.mount: Deactivated successfully. May 13 00:28:36.984674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428436980.mount: Deactivated successfully. 
May 13 00:28:37.309539 containerd[1654]: time="2025-05-13T00:28:37.309507295Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:37.310143 containerd[1654]: time="2025-05-13T00:28:37.310117735Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 00:28:37.314407 containerd[1654]: time="2025-05-13T00:28:37.314365563Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:37.315592 containerd[1654]: time="2025-05-13T00:28:37.315570539Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:37.316113 containerd[1654]: time="2025-05-13T00:28:37.316020709Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.794386544s" May 13 00:28:37.316113 containerd[1654]: time="2025-05-13T00:28:37.316039915Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 00:28:37.329197 containerd[1654]: time="2025-05-13T00:28:37.329122577Z" level=info msg="CreateContainer within sandbox \"803c36877d4b069116d5d4c4e4617660ad90518573b685fe9d19f4dee8de946b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:28:37.336523 containerd[1654]: time="2025-05-13T00:28:37.336501064Z" level=info msg="CreateContainer within sandbox \"803c36877d4b069116d5d4c4e4617660ad90518573b685fe9d19f4dee8de946b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eecb19c33e6eb9b697543cc2b80e3a56ccaabd4052285be12d89de56388869d0\"" May 13 00:28:37.337113 containerd[1654]: time="2025-05-13T00:28:37.336749310Z" level=info msg="StartContainer for \"eecb19c33e6eb9b697543cc2b80e3a56ccaabd4052285be12d89de56388869d0\"" May 13 00:28:37.378595 containerd[1654]: time="2025-05-13T00:28:37.378568188Z" level=info msg="StartContainer for \"eecb19c33e6eb9b697543cc2b80e3a56ccaabd4052285be12d89de56388869d0\" returns successfully" May 13 00:28:40.249975 kubelet[2967]: I0513 00:28:40.249547 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-tmmdg" podStartSLOduration=3.442144888 podStartE2EDuration="5.249535006s" podCreationTimestamp="2025-05-13 00:28:35 +0000 UTC" firstStartedPulling="2025-05-13 00:28:35.520732449 +0000 UTC m=+15.342816543" lastFinishedPulling="2025-05-13 00:28:37.32812257 +0000 UTC m=+17.150206661" observedRunningTime="2025-05-13 00:28:37.39263016 +0000 UTC m=+17.214714257" watchObservedRunningTime="2025-05-13 00:28:40.249535006 +0000 UTC m=+20.071619109" May 13 00:28:40.249975 kubelet[2967]: I0513 00:28:40.249648 2967 topology_manager.go:215] "Topology Admit Handler" podUID="3d70339b-c076-427c-b84c-4b7a62badc7c" podNamespace="calico-system" podName="calico-typha-bc5f68658-nz7br" May 13 00:28:40.287041 kubelet[2967]: I0513 00:28:40.286794 2967 topology_manager.go:215] "Topology Admit 
Handler" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" podNamespace="calico-system" podName="calico-node-8sn2r" May 13 00:28:40.364422 kubelet[2967]: I0513 00:28:40.363479 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-bin-dir\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.364422 kubelet[2967]: I0513 00:28:40.363511 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbg9\" (UniqueName: \"kubernetes.io/projected/3d70339b-c076-427c-b84c-4b7a62badc7c-kube-api-access-bsbg9\") pod \"calico-typha-bc5f68658-nz7br\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " pod="calico-system/calico-typha-bc5f68658-nz7br" May 13 00:28:40.364422 kubelet[2967]: I0513 00:28:40.363529 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-tigera-ca-bundle\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.364422 kubelet[2967]: I0513 00:28:40.363541 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-lib-modules\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.364422 kubelet[2967]: I0513 00:28:40.363551 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-net-dir\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.366940 kubelet[2967]: I0513 00:28:40.363561 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d70339b-c076-427c-b84c-4b7a62badc7c-typha-certs\") pod \"calico-typha-bc5f68658-nz7br\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " pod="calico-system/calico-typha-bc5f68658-nz7br" May 13 00:28:40.366940 kubelet[2967]: I0513 00:28:40.363570 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-xtables-lock\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.366940 kubelet[2967]: I0513 00:28:40.363579 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-log-dir\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.366940 kubelet[2967]: I0513 00:28:40.363590 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-flexvol-driver-host\") pod \"calico-node-8sn2r\" (UID: 
\"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.366940 kubelet[2967]: I0513 00:28:40.363600 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-node-certs\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.368787 kubelet[2967]: I0513 00:28:40.363608 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-lib-calico\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.368787 kubelet[2967]: I0513 00:28:40.363617 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d70339b-c076-427c-b84c-4b7a62badc7c-tigera-ca-bundle\") pod \"calico-typha-bc5f68658-nz7br\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " pod="calico-system/calico-typha-bc5f68658-nz7br" May 13 00:28:40.368787 kubelet[2967]: I0513 00:28:40.363626 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-run-calico\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.368787 kubelet[2967]: I0513 00:28:40.363637 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshzx\" (UniqueName: \"kubernetes.io/projected/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-kube-api-access-xshzx\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.368787 kubelet[2967]: I0513 00:28:40.363646 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-policysync\") pod \"calico-node-8sn2r\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " pod="calico-system/calico-node-8sn2r" May 13 00:28:40.437534 kubelet[2967]: I0513 00:28:40.437503 2967 topology_manager.go:215] "Topology Admit Handler" podUID="8207c568-159c-4015-827d-1b226c94f3cf" podNamespace="calico-system" podName="csi-node-driver-gfr9r" May 13 00:28:40.438082 kubelet[2967]: E0513 00:28:40.438046 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.480359 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.482347 kubelet[2967]: W0513 00:28:40.480387 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.481104 2967 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.481291 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.482347 kubelet[2967]: W0513 00:28:40.481297 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.481305 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.481418 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.482347 kubelet[2967]: W0513 00:28:40.481422 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.482347 kubelet[2967]: E0513 00:28:40.481427 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.487953 kubelet[2967]: E0513 00:28:40.487740 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.487953 kubelet[2967]: W0513 00:28:40.487754 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.487953 kubelet[2967]: E0513 00:28:40.487768 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.488428 kubelet[2967]: E0513 00:28:40.488291 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.488428 kubelet[2967]: W0513 00:28:40.488304 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.488428 kubelet[2967]: E0513 00:28:40.488350 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.488665 kubelet[2967]: E0513 00:28:40.488514 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.488665 kubelet[2967]: W0513 00:28:40.488522 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.488665 kubelet[2967]: E0513 00:28:40.488532 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.488665 kubelet[2967]: E0513 00:28:40.488626 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.488665 kubelet[2967]: W0513 00:28:40.488630 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.488665 kubelet[2967]: E0513 00:28:40.488636 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.488771 kubelet[2967]: E0513 00:28:40.488711 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.488771 kubelet[2967]: W0513 00:28:40.488715 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.488771 kubelet[2967]: E0513 00:28:40.488720 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.488822 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490787 kubelet[2967]: W0513 00:28:40.488828 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.488834 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.489425 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490787 kubelet[2967]: W0513 00:28:40.489432 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.489440 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.489547 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490787 kubelet[2967]: W0513 00:28:40.489552 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.489559 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.490787 kubelet[2967]: E0513 00:28:40.489636 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490967 kubelet[2967]: W0513 00:28:40.489641 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490967 kubelet[2967]: E0513 00:28:40.489646 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.490967 kubelet[2967]: E0513 00:28:40.489723 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490967 kubelet[2967]: W0513 00:28:40.489727 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490967 kubelet[2967]: E0513 00:28:40.489732 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.490967 kubelet[2967]: E0513 00:28:40.489819 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.490967 kubelet[2967]: W0513 00:28:40.489823 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.490967 kubelet[2967]: E0513 00:28:40.489828 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.492161 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496586 kubelet[2967]: W0513 00:28:40.492174 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.492194 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.494156 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496586 kubelet[2967]: W0513 00:28:40.494168 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.494182 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.494809 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496586 kubelet[2967]: W0513 00:28:40.494817 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.494827 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.496586 kubelet[2967]: E0513 00:28:40.494918 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496839 kubelet[2967]: W0513 00:28:40.494923 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496839 kubelet[2967]: E0513 00:28:40.494927 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.496839 kubelet[2967]: E0513 00:28:40.495003 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496839 kubelet[2967]: W0513 00:28:40.495007 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496839 kubelet[2967]: E0513 00:28:40.495011 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.496839 kubelet[2967]: E0513 00:28:40.495114 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.496839 kubelet[2967]: W0513 00:28:40.495118 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.496839 kubelet[2967]: E0513 00:28:40.495124 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.502125 kubelet[2967]: E0513 00:28:40.502056 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.502125 kubelet[2967]: W0513 00:28:40.502076 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.502125 kubelet[2967]: E0513 00:28:40.502094 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.530877 kubelet[2967]: E0513 00:28:40.530645 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.530877 kubelet[2967]: W0513 00:28:40.530660 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.530877 kubelet[2967]: E0513 00:28:40.530674 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.530877 kubelet[2967]: E0513 00:28:40.530782 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.530877 kubelet[2967]: W0513 00:28:40.530787 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.530877 kubelet[2967]: E0513 00:28:40.530793 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.531054 kubelet[2967]: E0513 00:28:40.530902 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531054 kubelet[2967]: W0513 00:28:40.530908 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531054 kubelet[2967]: E0513 00:28:40.530917 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.531122 kubelet[2967]: E0513 00:28:40.531097 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531146 kubelet[2967]: W0513 00:28:40.531122 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531146 kubelet[2967]: E0513 00:28:40.531133 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.531507 kubelet[2967]: E0513 00:28:40.531494 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531507 kubelet[2967]: W0513 00:28:40.531501 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531557 kubelet[2967]: E0513 00:28:40.531511 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531609 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531867 kubelet[2967]: W0513 00:28:40.531617 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531624 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531707 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531867 kubelet[2967]: W0513 00:28:40.531712 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531717 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531805 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.531867 kubelet[2967]: W0513 00:28:40.531811 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.531867 kubelet[2967]: E0513 00:28:40.531819 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.532021 kubelet[2967]: E0513 00:28:40.531912 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.532021 kubelet[2967]: W0513 00:28:40.531916 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.532021 kubelet[2967]: E0513 00:28:40.531922 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.532021 kubelet[2967]: E0513 00:28:40.531996 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.532021 kubelet[2967]: W0513 00:28:40.532000 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.532021 kubelet[2967]: E0513 00:28:40.532005 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.532115 kubelet[2967]: E0513 00:28:40.532084 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.532115 kubelet[2967]: W0513 00:28:40.532090 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.532115 kubelet[2967]: E0513 00:28:40.532095 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.532356 kubelet[2967]: E0513 00:28:40.532175 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.532356 kubelet[2967]: W0513 00:28:40.532181 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.532356 kubelet[2967]: E0513 00:28:40.532186 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.532356 kubelet[2967]: E0513 00:28:40.532263 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.532356 kubelet[2967]: W0513 00:28:40.532267 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.532356 kubelet[2967]: E0513 00:28:40.532272 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532387 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533442 kubelet[2967]: W0513 00:28:40.532393 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532400 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532563 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533442 kubelet[2967]: W0513 00:28:40.532569 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532577 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532746 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533442 kubelet[2967]: W0513 00:28:40.532751 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532757 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533442 kubelet[2967]: E0513 00:28:40.532855 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533667 kubelet[2967]: W0513 00:28:40.532859 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.532864 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.533102 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533667 kubelet[2967]: W0513 00:28:40.533107 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.533112 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.533219 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533667 kubelet[2967]: W0513 00:28:40.533224 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.533229 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.533667 kubelet[2967]: E0513 00:28:40.533313 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.533667 kubelet[2967]: W0513 00:28:40.533320 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.534039 kubelet[2967]: E0513 00:28:40.533336 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.554066 containerd[1654]: time="2025-05-13T00:28:40.554043480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc5f68658-nz7br,Uid:3d70339b-c076-427c-b84c-4b7a62badc7c,Namespace:calico-system,Attempt:0,}" May 13 00:28:40.568416 kubelet[2967]: E0513 00:28:40.568277 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.568416 kubelet[2967]: W0513 00:28:40.568292 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.568416 kubelet[2967]: E0513 00:28:40.568307 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.568416 kubelet[2967]: I0513 00:28:40.568326 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8207c568-159c-4015-827d-1b226c94f3cf-varrun\") pod \"csi-node-driver-gfr9r\" (UID: \"8207c568-159c-4015-827d-1b226c94f3cf\") " pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:40.571130 kubelet[2967]: E0513 00:28:40.571110 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.571130 kubelet[2967]: W0513 00:28:40.571125 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.571207 kubelet[2967]: E0513 00:28:40.571149 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.571207 kubelet[2967]: I0513 00:28:40.571168 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8207c568-159c-4015-827d-1b226c94f3cf-kubelet-dir\") pod \"csi-node-driver-gfr9r\" (UID: \"8207c568-159c-4015-827d-1b226c94f3cf\") " pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:40.572982 kubelet[2967]: E0513 00:28:40.571386 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.572982 kubelet[2967]: W0513 00:28:40.571394 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.572982 kubelet[2967]: E0513 00:28:40.571464 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.572982 kubelet[2967]: I0513 00:28:40.571477 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8207c568-159c-4015-827d-1b226c94f3cf-registration-dir\") pod \"csi-node-driver-gfr9r\" (UID: \"8207c568-159c-4015-827d-1b226c94f3cf\") " pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:40.572982 kubelet[2967]: E0513 00:28:40.571538 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.572982 kubelet[2967]: W0513 00:28:40.571543 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.572982 kubelet[2967]: E0513 00:28:40.571612 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.572982 kubelet[2967]: E0513 00:28:40.571662 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.572982 kubelet[2967]: W0513 00:28:40.571667 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.573516 kubelet[2967]: E0513 00:28:40.571677 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.574148 kubelet[2967]: E0513 00:28:40.574030 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.574148 kubelet[2967]: W0513 00:28:40.574043 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.574148 kubelet[2967]: E0513 00:28:40.574062 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.574148 kubelet[2967]: I0513 00:28:40.574080 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8207c568-159c-4015-827d-1b226c94f3cf-socket-dir\") pod \"csi-node-driver-gfr9r\" (UID: \"8207c568-159c-4015-827d-1b226c94f3cf\") " pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:40.575045 kubelet[2967]: E0513 00:28:40.575030 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.575045 kubelet[2967]: W0513 00:28:40.575042 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.575145 kubelet[2967]: E0513 00:28:40.575058 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.575167 kubelet[2967]: E0513 00:28:40.575163 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.575184 kubelet[2967]: W0513 00:28:40.575167 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.575184 kubelet[2967]: E0513 00:28:40.575173 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.576324 kubelet[2967]: E0513 00:28:40.576311 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.576324 kubelet[2967]: W0513 00:28:40.576322 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.578435 kubelet[2967]: E0513 00:28:40.576426 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.578435 kubelet[2967]: E0513 00:28:40.576633 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.578435 kubelet[2967]: W0513 00:28:40.576639 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.578435 kubelet[2967]: E0513 00:28:40.576728 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.578435 kubelet[2967]: I0513 00:28:40.576748 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbfsm\" (UniqueName: \"kubernetes.io/projected/8207c568-159c-4015-827d-1b226c94f3cf-kube-api-access-gbfsm\") pod \"csi-node-driver-gfr9r\" (UID: \"8207c568-159c-4015-827d-1b226c94f3cf\") " pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:40.578435 kubelet[2967]: E0513 00:28:40.578348 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.578435 kubelet[2967]: W0513 00:28:40.578362 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.578435 kubelet[2967]: E0513 00:28:40.578384 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.578592 kubelet[2967]: E0513 00:28:40.578548 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.578592 kubelet[2967]: W0513 00:28:40.578553 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.578592 kubelet[2967]: E0513 00:28:40.578574 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578694 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.579057 kubelet[2967]: W0513 00:28:40.578700 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578705 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578856 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.579057 kubelet[2967]: W0513 00:28:40.578863 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578868 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578960 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.579057 kubelet[2967]: W0513 00:28:40.578965 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.579057 kubelet[2967]: E0513 00:28:40.578979 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.579890 containerd[1654]: time="2025-05-13T00:28:40.579288665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:40.579890 containerd[1654]: time="2025-05-13T00:28:40.579351323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:40.579890 containerd[1654]: time="2025-05-13T00:28:40.579374116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:40.579890 containerd[1654]: time="2025-05-13T00:28:40.579451811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:40.592021 containerd[1654]: time="2025-05-13T00:28:40.591967805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8sn2r,Uid:a3f0af17-310d-4953-ac6f-bfe350f5a6b8,Namespace:calico-system,Attempt:0,}" May 13 00:28:40.629481 containerd[1654]: time="2025-05-13T00:28:40.629169068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:40.629848 containerd[1654]: time="2025-05-13T00:28:40.629427444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:40.629848 containerd[1654]: time="2025-05-13T00:28:40.629444174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:40.629848 containerd[1654]: time="2025-05-13T00:28:40.629692572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:40.679884 kubelet[2967]: E0513 00:28:40.679669 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.679884 kubelet[2967]: W0513 00:28:40.679682 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.679884 kubelet[2967]: E0513 00:28:40.679695 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.680013 kubelet[2967]: E0513 00:28:40.679956 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.680013 kubelet[2967]: W0513 00:28:40.679961 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.680013 kubelet[2967]: E0513 00:28:40.679967 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.680502 kubelet[2967]: E0513 00:28:40.680127 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.680502 kubelet[2967]: W0513 00:28:40.680133 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.680502 kubelet[2967]: E0513 00:28:40.680139 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.680502 kubelet[2967]: E0513 00:28:40.680400 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.680502 kubelet[2967]: W0513 00:28:40.680405 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.680502 kubelet[2967]: E0513 00:28:40.680410 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.680613 kubelet[2967]: E0513 00:28:40.680542 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.680613 kubelet[2967]: W0513 00:28:40.680547 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.680613 kubelet[2967]: E0513 00:28:40.680552 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.681643 kubelet[2967]: E0513 00:28:40.680746 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.681643 kubelet[2967]: W0513 00:28:40.681222 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.681643 kubelet[2967]: E0513 00:28:40.681233 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.682759 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.683594 kubelet[2967]: W0513 00:28:40.682771 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.682783 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.683145 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.683594 kubelet[2967]: W0513 00:28:40.683150 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.683158 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.683441 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.683594 kubelet[2967]: W0513 00:28:40.683448 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.683594 kubelet[2967]: E0513 00:28:40.683454 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.683841 kubelet[2967]: E0513 00:28:40.683780 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.683841 kubelet[2967]: W0513 00:28:40.683785 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.683841 kubelet[2967]: E0513 00:28:40.683792 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.683959 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.685832 kubelet[2967]: W0513 00:28:40.683966 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.683971 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.684156 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.685832 kubelet[2967]: W0513 00:28:40.684161 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.684166 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.684676 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.685832 kubelet[2967]: W0513 00:28:40.684694 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.684702 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.685832 kubelet[2967]: E0513 00:28:40.684922 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.686064 kubelet[2967]: W0513 00:28:40.684928 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.686064 kubelet[2967]: E0513 00:28:40.684935 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.686064 kubelet[2967]: E0513 00:28:40.685159 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.686064 kubelet[2967]: W0513 00:28:40.685164 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.686064 kubelet[2967]: E0513 00:28:40.685170 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.688355 containerd[1654]: time="2025-05-13T00:28:40.688209517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc5f68658-nz7br,Uid:3d70339b-c076-427c-b84c-4b7a62badc7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\"" May 13 00:28:40.690992 kubelet[2967]: E0513 00:28:40.690918 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.690992 kubelet[2967]: W0513 00:28:40.690932 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.690992 kubelet[2967]: E0513 00:28:40.690946 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.699216 kubelet[2967]: E0513 00:28:40.698986 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.699216 kubelet[2967]: W0513 00:28:40.699002 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.699216 kubelet[2967]: E0513 00:28:40.699017 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.699216 kubelet[2967]: E0513 00:28:40.699144 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.699216 kubelet[2967]: W0513 00:28:40.699150 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.699216 kubelet[2967]: E0513 00:28:40.699156 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.700110 kubelet[2967]: E0513 00:28:40.699907 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.700110 kubelet[2967]: W0513 00:28:40.699917 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.700110 kubelet[2967]: E0513 00:28:40.699927 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.700110 kubelet[2967]: E0513 00:28:40.700049 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.700110 kubelet[2967]: W0513 00:28:40.700053 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.700110 kubelet[2967]: E0513 00:28:40.700059 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.700435 kubelet[2967]: E0513 00:28:40.700317 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.700435 kubelet[2967]: W0513 00:28:40.700324 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.700435 kubelet[2967]: E0513 00:28:40.700352 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.700613 kubelet[2967]: E0513 00:28:40.700533 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.700613 kubelet[2967]: W0513 00:28:40.700540 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.700613 kubelet[2967]: E0513 00:28:40.700549 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:40.700801 kubelet[2967]: E0513 00:28:40.700725 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.700801 kubelet[2967]: W0513 00:28:40.700731 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.700801 kubelet[2967]: E0513 00:28:40.700740 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.701382 kubelet[2967]: E0513 00:28:40.701283 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.701382 kubelet[2967]: W0513 00:28:40.701297 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.701382 kubelet[2967]: E0513 00:28:40.701305 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.703373 kubelet[2967]: E0513 00:28:40.701572 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.703373 kubelet[2967]: W0513 00:28:40.701579 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.703373 kubelet[2967]: E0513 00:28:40.701591 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:40.710963 containerd[1654]: time="2025-05-13T00:28:40.710941630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8sn2r,Uid:a3f0af17-310d-4953-ac6f-bfe350f5a6b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\"" May 13 00:28:40.721810 containerd[1654]: time="2025-05-13T00:28:40.721792604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:28:40.723869 kubelet[2967]: E0513 00:28:40.723855 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:40.723998 kubelet[2967]: W0513 00:28:40.723987 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:40.724056 kubelet[2967]: E0513 00:28:40.724048 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:42.326834 kubelet[2967]: E0513 00:28:42.326800 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:42.634956 containerd[1654]: time="2025-05-13T00:28:42.634886274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:42.635643 containerd[1654]: time="2025-05-13T00:28:42.635576919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 00:28:42.635892 containerd[1654]: time="2025-05-13T00:28:42.635876059Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:42.636884 containerd[1654]: time="2025-05-13T00:28:42.636862060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:42.637340 containerd[1654]: time="2025-05-13T00:28:42.637262107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 1.915348003s" May 13 00:28:42.637340 containerd[1654]: time="2025-05-13T00:28:42.637278770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 00:28:42.637859 containerd[1654]: time="2025-05-13T00:28:42.637844611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:28:42.650223 containerd[1654]: time="2025-05-13T00:28:42.650198349Z" level=info msg="CreateContainer within sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:28:42.656087 containerd[1654]: time="2025-05-13T00:28:42.656040123Z" level=info msg="CreateContainer within sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\"" May 13 00:28:42.657035 containerd[1654]: time="2025-05-13T00:28:42.656487148Z" level=info msg="StartContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\"" May 13 00:28:42.793357 containerd[1654]: time="2025-05-13T00:28:42.793285075Z" level=info msg="StartContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" returns successfully" May 13 00:28:43.490444 kubelet[2967]: E0513 00:28:43.490355 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.490444 kubelet[2967]: W0513 00:28:43.490378 2967 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.490444 kubelet[2967]: E0513 00:28:43.490396 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.490835 kubelet[2967]: E0513 00:28:43.490566 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.490835 kubelet[2967]: W0513 00:28:43.490573 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.490835 kubelet[2967]: E0513 00:28:43.490580 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.490835 kubelet[2967]: E0513 00:28:43.490715 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.490835 kubelet[2967]: W0513 00:28:43.490721 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.490835 kubelet[2967]: E0513 00:28:43.490728 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.490849 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.504950 kubelet[2967]: W0513 00:28:43.490868 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.490878 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.491010 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.504950 kubelet[2967]: W0513 00:28:43.491031 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.491040 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.491154 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.504950 kubelet[2967]: W0513 00:28:43.491161 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.491168 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.504950 kubelet[2967]: E0513 00:28:43.491304 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.505237 kubelet[2967]: W0513 00:28:43.491311 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491319 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491444 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.505237 kubelet[2967]: W0513 00:28:43.491465 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491475 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491600 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.505237 kubelet[2967]: W0513 00:28:43.491606 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491613 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.505237 kubelet[2967]: E0513 00:28:43.491756 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.505237 kubelet[2967]: W0513 00:28:43.491763 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.491770 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.491934 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516356 kubelet[2967]: W0513 00:28:43.491940 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.491947 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.492073 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516356 kubelet[2967]: W0513 00:28:43.492079 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.492086 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.492197 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516356 kubelet[2967]: W0513 00:28:43.492203 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516356 kubelet[2967]: E0513 00:28:43.492210 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.492417 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516653 kubelet[2967]: W0513 00:28:43.492423 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.492430 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.492538 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516653 kubelet[2967]: W0513 00:28:43.492544 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.492550 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.498971 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516653 kubelet[2967]: W0513 00:28:43.498986 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.499010 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516653 kubelet[2967]: E0513 00:28:43.499161 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516867 kubelet[2967]: W0513 00:28:43.499174 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499187 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499361 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516867 kubelet[2967]: W0513 00:28:43.499369 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499382 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499533 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516867 kubelet[2967]: W0513 00:28:43.499539 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499552 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.516867 kubelet[2967]: E0513 00:28:43.499670 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.516867 kubelet[2967]: W0513 00:28:43.499676 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.499688 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.499808 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517138 kubelet[2967]: W0513 00:28:43.499814 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.499822 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.499930 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517138 kubelet[2967]: W0513 00:28:43.499935 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.499940 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.500090 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517138 kubelet[2967]: W0513 00:28:43.500095 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517138 kubelet[2967]: E0513 00:28:43.500104 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500287 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517376 kubelet[2967]: W0513 00:28:43.500294 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500307 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500441 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517376 kubelet[2967]: W0513 00:28:43.500446 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500457 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500587 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.517376 kubelet[2967]: W0513 00:28:43.500595 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500609 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.517376 kubelet[2967]: E0513 00:28:43.500783 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.521930 kubelet[2967]: W0513 00:28:43.500790 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.500817 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.500971 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.521930 kubelet[2967]: W0513 00:28:43.500978 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.500990 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.501152 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.521930 kubelet[2967]: W0513 00:28:43.501158 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.501170 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.521930 kubelet[2967]: E0513 00:28:43.501326 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.521930 kubelet[2967]: W0513 00:28:43.501346 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501369 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501517 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.522213 kubelet[2967]: W0513 00:28:43.501522 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501532 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501724 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.522213 kubelet[2967]: W0513 00:28:43.501732 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501754 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501889 2967 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:28:43.522213 kubelet[2967]: W0513 00:28:43.501896 2967 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:28:43.522213 kubelet[2967]: E0513 00:28:43.501903 2967 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:28:44.077489 containerd[1654]: time="2025-05-13T00:28:44.077457337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:44.078298 containerd[1654]: time="2025-05-13T00:28:44.078240105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 00:28:44.078579 containerd[1654]: time="2025-05-13T00:28:44.078563263Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:44.079811 containerd[1654]: time="2025-05-13T00:28:44.079794247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:44.080627 containerd[1654]: time="2025-05-13T00:28:44.080609881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.442748729s" May 13 00:28:44.080654 containerd[1654]: time="2025-05-13T00:28:44.080630742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 00:28:44.082881 containerd[1654]: time="2025-05-13T00:28:44.082862451Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:28:44.089505 containerd[1654]: time="2025-05-13T00:28:44.089403065Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\"" May 13 00:28:44.089935 containerd[1654]: time="2025-05-13T00:28:44.089834576Z" level=info msg="StartContainer for \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\"" May 13 00:28:44.115496 systemd[1]: run-containerd-runc-k8s.io-d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767-runc.kgi3m7.mount: Deactivated successfully. 
May 13 00:28:44.143608 containerd[1654]: time="2025-05-13T00:28:44.143584426Z" level=info msg="StartContainer for \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\" returns successfully" May 13 00:28:44.320837 kubelet[2967]: E0513 00:28:44.320225 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:44.409936 kubelet[2967]: I0513 00:28:44.409874 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:44.434587 kubelet[2967]: I0513 00:28:44.426830 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bc5f68658-nz7br" podStartSLOduration=2.5101603949999998 podStartE2EDuration="4.426817368s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:28:40.721121609 +0000 UTC m=+20.543205702" lastFinishedPulling="2025-05-13 00:28:42.637778581 +0000 UTC m=+22.459862675" observedRunningTime="2025-05-13 00:28:43.421268936 +0000 UTC m=+23.243353038" watchObservedRunningTime="2025-05-13 00:28:44.426817368 +0000 UTC m=+24.248901465" May 13 00:28:44.456495 containerd[1654]: time="2025-05-13T00:28:44.454885494Z" level=info msg="shim disconnected" id=d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767 namespace=k8s.io May 13 00:28:44.457165 containerd[1654]: time="2025-05-13T00:28:44.456616761Z" level=warning msg="cleaning up after shim disconnected" id=d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767 namespace=k8s.io May 13 00:28:44.457165 containerd[1654]: time="2025-05-13T00:28:44.456642401Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:28:45.087918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767-rootfs.mount: Deactivated successfully. 
May 13 00:28:45.412971 containerd[1654]: time="2025-05-13T00:28:45.412846106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:28:46.321678 kubelet[2967]: E0513 00:28:46.321324 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:48.319854 kubelet[2967]: E0513 00:28:48.319723 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:49.053461 containerd[1654]: time="2025-05-13T00:28:49.053417604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:49.054209 containerd[1654]: time="2025-05-13T00:28:49.054050466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 00:28:49.054882 containerd[1654]: time="2025-05-13T00:28:49.054722832Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:49.056203 containerd[1654]: time="2025-05-13T00:28:49.056172957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:49.056918 containerd[1654]: time="2025-05-13T00:28:49.056842083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.643956972s" May 13 00:28:49.056918 containerd[1654]: time="2025-05-13T00:28:49.056864653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 00:28:49.060459 containerd[1654]: time="2025-05-13T00:28:49.060337968Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:28:49.068636 containerd[1654]: time="2025-05-13T00:28:49.068569570Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\"" May 13 00:28:49.071612 containerd[1654]: time="2025-05-13T00:28:49.070021341Z" level=info msg="StartContainer for \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\"" May 13 00:28:49.106685 containerd[1654]: time="2025-05-13T00:28:49.106628028Z" level=info msg="StartContainer for \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\" returns successfully" May 13 00:28:50.321308 
kubelet[2967]: E0513 00:28:50.321003 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:50.837509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b-rootfs.mount: Deactivated successfully. May 13 00:28:50.877510 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:28:50.902884 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:28:50.917462 kubelet[2967]: I0513 00:28:50.886657 2967 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:28:50.917529 containerd[1654]: time="2025-05-13T00:28:50.893697967Z" level=info msg="shim disconnected" id=6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b namespace=k8s.io May 13 00:28:50.917529 containerd[1654]: time="2025-05-13T00:28:50.893775639Z" level=warning msg="cleaning up after shim disconnected" id=6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b namespace=k8s.io May 13 00:28:50.917529 containerd[1654]: time="2025-05-13T00:28:50.893781540Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:28:50.877548 systemd-resolved[1542]: Flushed all caches. May 13 00:28:51.028797 kubelet[2967]: I0513 00:28:51.028705 2967 topology_manager.go:215] "Topology Admit Handler" podUID="5ecaa235-5efa-4dbf-aaf9-e49c99601adb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jpqnf" May 13 00:28:51.056597 kubelet[2967]: I0513 00:28:51.056348 2967 topology_manager.go:215] "Topology Admit Handler" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" podNamespace="calico-system" podName="calico-kube-controllers-786f4bd5f6-5nhlr" May 13 00:28:51.056935 kubelet[2967]: I0513 00:28:51.056900 2967 topology_manager.go:215] "Topology Admit Handler" podUID="15d3cbd6-98df-4be7-b394-57ec06b3789c" podNamespace="calico-apiserver" podName="calico-apiserver-76896c5c69-r4r9w" May 13 00:28:51.057049 kubelet[2967]: I0513 00:28:51.057030 2967 topology_manager.go:215] "Topology Admit Handler" podUID="6eb2e449-b9a2-4894-b3ae-1030b0bd4c24" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wnsqm" May 13 00:28:51.057125 kubelet[2967]: I0513 00:28:51.057109 2967 topology_manager.go:215] "Topology Admit Handler" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" podNamespace="calico-apiserver" podName="calico-apiserver-5fcdd59ffd-hwxmq" May 13 00:28:51.057202 kubelet[2967]: I0513 00:28:51.057187 2967 topology_manager.go:215] "Topology Admit Handler" podUID="d2f1ddd1-14cd-452c-934e-9a28944a7b23" podNamespace="calico-apiserver" podName="calico-apiserver-5fcdd59ffd-gtcr6" May 13 00:28:51.148138 kubelet[2967]: I0513 00:28:51.148056 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vtbh\" (UniqueName: \"kubernetes.io/projected/d11f4d12-c6f0-4786-9ade-890faf15b637-kube-api-access-9vtbh\") pod \"calico-kube-controllers-786f4bd5f6-5nhlr\" (UID: \"d11f4d12-c6f0-4786-9ade-890faf15b637\") " pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" May 13 00:28:51.148670 kubelet[2967]: I0513 00:28:51.148241 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ppzc\" (UniqueName: 
\"kubernetes.io/projected/d2f1ddd1-14cd-452c-934e-9a28944a7b23-kube-api-access-9ppzc\") pod \"calico-apiserver-5fcdd59ffd-gtcr6\" (UID: \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\") " pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" May 13 00:28:51.148670 kubelet[2967]: I0513 00:28:51.148264 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ecaa235-5efa-4dbf-aaf9-e49c99601adb-config-volume\") pod \"coredns-7db6d8ff4d-jpqnf\" (UID: \"5ecaa235-5efa-4dbf-aaf9-e49c99601adb\") " pod="kube-system/coredns-7db6d8ff4d-jpqnf" May 13 00:28:51.148670 kubelet[2967]: I0513 00:28:51.148278 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f4d12-c6f0-4786-9ade-890faf15b637-tigera-ca-bundle\") pod \"calico-kube-controllers-786f4bd5f6-5nhlr\" (UID: \"d11f4d12-c6f0-4786-9ade-890faf15b637\") " pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" May 13 00:28:51.148670 kubelet[2967]: I0513 00:28:51.148291 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmcbb\" (UniqueName: \"kubernetes.io/projected/15d3cbd6-98df-4be7-b394-57ec06b3789c-kube-api-access-zmcbb\") pod \"calico-apiserver-76896c5c69-r4r9w\" (UID: \"15d3cbd6-98df-4be7-b394-57ec06b3789c\") " pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" May 13 00:28:51.148670 kubelet[2967]: I0513 00:28:51.148307 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdw2\" (UniqueName: \"kubernetes.io/projected/f38b69e3-9879-44d1-9fca-1ad977d43c8a-kube-api-access-lmdw2\") pod \"calico-apiserver-5fcdd59ffd-hwxmq\" (UID: \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\") " pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" May 13 00:28:51.148778 kubelet[2967]: I0513 00:28:51.148321 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15d3cbd6-98df-4be7-b394-57ec06b3789c-calico-apiserver-certs\") pod \"calico-apiserver-76896c5c69-r4r9w\" (UID: \"15d3cbd6-98df-4be7-b394-57ec06b3789c\") " pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" May 13 00:28:51.148778 kubelet[2967]: I0513 00:28:51.148345 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f38b69e3-9879-44d1-9fca-1ad977d43c8a-calico-apiserver-certs\") pod \"calico-apiserver-5fcdd59ffd-hwxmq\" (UID: \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\") " pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" May 13 00:28:51.148778 kubelet[2967]: I0513 00:28:51.148361 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eb2e449-b9a2-4894-b3ae-1030b0bd4c24-config-volume\") pod \"coredns-7db6d8ff4d-wnsqm\" (UID: \"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24\") " pod="kube-system/coredns-7db6d8ff4d-wnsqm" May 13 00:28:51.148778 kubelet[2967]: I0513 00:28:51.148373 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thwbv\" (UniqueName: \"kubernetes.io/projected/5ecaa235-5efa-4dbf-aaf9-e49c99601adb-kube-api-access-thwbv\") pod \"coredns-7db6d8ff4d-jpqnf\" (UID: 
\"5ecaa235-5efa-4dbf-aaf9-e49c99601adb\") " pod="kube-system/coredns-7db6d8ff4d-jpqnf" May 13 00:28:51.148778 kubelet[2967]: I0513 00:28:51.148387 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwns9\" (UniqueName: \"kubernetes.io/projected/6eb2e449-b9a2-4894-b3ae-1030b0bd4c24-kube-api-access-rwns9\") pod \"coredns-7db6d8ff4d-wnsqm\" (UID: \"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24\") " pod="kube-system/coredns-7db6d8ff4d-wnsqm" May 13 00:28:51.148862 kubelet[2967]: I0513 00:28:51.148404 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2f1ddd1-14cd-452c-934e-9a28944a7b23-calico-apiserver-certs\") pod \"calico-apiserver-5fcdd59ffd-gtcr6\" (UID: \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\") " pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" May 13 00:28:51.417451 containerd[1654]: time="2025-05-13T00:28:51.416943139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786f4bd5f6-5nhlr,Uid:d11f4d12-c6f0-4786-9ade-890faf15b637,Namespace:calico-system,Attempt:0,}" May 13 00:28:51.417451 containerd[1654]: time="2025-05-13T00:28:51.416991055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76896c5c69-r4r9w,Uid:15d3cbd6-98df-4be7-b394-57ec06b3789c,Namespace:calico-apiserver,Attempt:0,}" May 13 00:28:51.417980 containerd[1654]: time="2025-05-13T00:28:51.417612050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-gtcr6,Uid:d2f1ddd1-14cd-452c-934e-9a28944a7b23,Namespace:calico-apiserver,Attempt:0,}" May 13 00:28:51.420979 containerd[1654]: time="2025-05-13T00:28:51.420965592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jpqnf,Uid:5ecaa235-5efa-4dbf-aaf9-e49c99601adb,Namespace:kube-system,Attempt:0,}" May 13 00:28:51.421362 containerd[1654]: time="2025-05-13T00:28:51.421349910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-hwxmq,Uid:f38b69e3-9879-44d1-9fca-1ad977d43c8a,Namespace:calico-apiserver,Attempt:0,}" May 13 00:28:51.432834 containerd[1654]: time="2025-05-13T00:28:51.432223000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnsqm,Uid:6eb2e449-b9a2-4894-b3ae-1030b0bd4c24,Namespace:kube-system,Attempt:0,}" May 13 00:28:51.436935 containerd[1654]: time="2025-05-13T00:28:51.436914476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:28:51.737793 containerd[1654]: time="2025-05-13T00:28:51.737573181Z" level=error msg="Failed to destroy network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.738492 containerd[1654]: time="2025-05-13T00:28:51.738477674Z" level=error msg="Failed to destroy network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.738731 containerd[1654]: time="2025-05-13T00:28:51.738717995Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.739653 containerd[1654]: time="2025-05-13T00:28:51.739266768Z" level=error msg="encountered an error cleaning up failed sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743141207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jpqnf,Uid:5ecaa235-5efa-4dbf-aaf9-e49c99601adb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743231240Z" level=error msg="Failed to destroy network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743444530Z" level=error msg="encountered an error cleaning up failed sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743466928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786f4bd5f6-5nhlr,Uid:d11f4d12-c6f0-4786-9ade-890faf15b637,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743528967Z" level=error msg="Failed to destroy network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743664842Z" level=error msg="encountered an error cleaning up failed sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 
containerd[1654]: time="2025-05-13T00:28:51.743681745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-hwxmq,Uid:f38b69e3-9879-44d1-9fca-1ad977d43c8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.743745559Z" level=error msg="Failed to destroy network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.744131593Z" level=error msg="encountered an error cleaning up failed sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744146 containerd[1654]: time="2025-05-13T00:28:51.744149983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76896c5c69-r4r9w,Uid:15d3cbd6-98df-4be7-b394-57ec06b3789c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744499 kubelet[2967]: E0513 00:28:51.743868 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744726 containerd[1654]: time="2025-05-13T00:28:51.744204722Z" level=error msg="Failed to destroy network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744726 containerd[1654]: time="2025-05-13T00:28:51.744389204Z" level=error msg="encountered an error cleaning up failed sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744885 kubelet[2967]: E0513 00:28:51.744828 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.744885 kubelet[2967]: E0513 00:28:51.744866 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.745008 containerd[1654]: time="2025-05-13T00:28:51.744985627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-gtcr6,Uid:d2f1ddd1-14cd-452c-934e-9a28944a7b23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.745211 kubelet[2967]: E0513 00:28:51.745175 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.745211 kubelet[2967]: E0513 00:28:51.745196 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.746693 kubelet[2967]: E0513 00:28:51.746676 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" May 13 00:28:51.746957 kubelet[2967]: E0513 00:28:51.746780 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" May 13 00:28:51.746957 kubelet[2967]: E0513 00:28:51.746827 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcdd59ffd-hwxmq_calico-apiserver(f38b69e3-9879-44d1-9fca-1ad977d43c8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcdd59ffd-hwxmq_calico-apiserver(f38b69e3-9879-44d1-9fca-1ad977d43c8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" May 13 00:28:51.747934 kubelet[2967]: E0513 00:28:51.747095 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" May 13 00:28:51.747934 kubelet[2967]: E0513 00:28:51.747123 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" May 13 00:28:51.747934 kubelet[2967]: E0513 00:28:51.747148 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcdd59ffd-gtcr6_calico-apiserver(d2f1ddd1-14cd-452c-934e-9a28944a7b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcdd59ffd-gtcr6_calico-apiserver(d2f1ddd1-14cd-452c-934e-9a28944a7b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" podUID="d2f1ddd1-14cd-452c-934e-9a28944a7b23" May 13 00:28:51.748055 kubelet[2967]: E0513 00:28:51.747174 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jpqnf" May 13 00:28:51.748055 kubelet[2967]: E0513 00:28:51.747201 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jpqnf" May 13 00:28:51.748055 kubelet[2967]: E0513 00:28:51.747224 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jpqnf_kube-system(5ecaa235-5efa-4dbf-aaf9-e49c99601adb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-jpqnf_kube-system(5ecaa235-5efa-4dbf-aaf9-e49c99601adb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jpqnf" podUID="5ecaa235-5efa-4dbf-aaf9-e49c99601adb" May 13 00:28:51.748170 kubelet[2967]: E0513 00:28:51.747249 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" May 13 00:28:51.748446 kubelet[2967]: E0513 00:28:51.748216 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" May 13 00:28:51.748446 kubelet[2967]: E0513 00:28:51.748242 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-786f4bd5f6-5nhlr_calico-system(d11f4d12-c6f0-4786-9ade-890faf15b637)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-786f4bd5f6-5nhlr_calico-system(d11f4d12-c6f0-4786-9ade-890faf15b637)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" May 13 00:28:51.748446 kubelet[2967]: E0513 00:28:51.748263 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" May 13 00:28:51.748611 kubelet[2967]: E0513 00:28:51.748272 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" May 13 00:28:51.748611 kubelet[2967]: E0513 00:28:51.748293 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-76896c5c69-r4r9w_calico-apiserver(15d3cbd6-98df-4be7-b394-57ec06b3789c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76896c5c69-r4r9w_calico-apiserver(15d3cbd6-98df-4be7-b394-57ec06b3789c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" podUID="15d3cbd6-98df-4be7-b394-57ec06b3789c" May 13 00:28:51.749346 containerd[1654]: time="2025-05-13T00:28:51.749139876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnsqm,Uid:6eb2e449-b9a2-4894-b3ae-1030b0bd4c24,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.749403 kubelet[2967]: E0513 00:28:51.749281 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:51.749403 kubelet[2967]: E0513 00:28:51.749301 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wnsqm" May 13 00:28:51.749403 kubelet[2967]: E0513 00:28:51.749310 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wnsqm" May 13 00:28:51.749488 kubelet[2967]: E0513 00:28:51.749343 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wnsqm_kube-system(6eb2e449-b9a2-4894-b3ae-1030b0bd4c24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wnsqm_kube-system(6eb2e449-b9a2-4894-b3ae-1030b0bd4c24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wnsqm" podUID="6eb2e449-b9a2-4894-b3ae-1030b0bd4c24" May 13 00:28:52.322133 containerd[1654]: time="2025-05-13T00:28:52.321855020Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfr9r,Uid:8207c568-159c-4015-827d-1b226c94f3cf,Namespace:calico-system,Attempt:0,}" May 13 00:28:52.425960 containerd[1654]: time="2025-05-13T00:28:52.425920782Z" level=error msg="Failed to destroy network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.427505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8-shm.mount: Deactivated successfully. May 13 00:28:52.427844 containerd[1654]: time="2025-05-13T00:28:52.427587265Z" level=error msg="encountered an error cleaning up failed sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.427844 containerd[1654]: time="2025-05-13T00:28:52.427630064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfr9r,Uid:8207c568-159c-4015-827d-1b226c94f3cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.427892 kubelet[2967]: E0513 00:28:52.427785 2967 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.427892 kubelet[2967]: E0513 00:28:52.427826 2967 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:52.427892 kubelet[2967]: E0513 00:28:52.427838 2967 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfr9r" May 13 00:28:52.427958 kubelet[2967]: E0513 00:28:52.427873 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gfr9r_calico-system(8207c568-159c-4015-827d-1b226c94f3cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gfr9r_calico-system(8207c568-159c-4015-827d-1b226c94f3cf)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:52.438671 kubelet[2967]: I0513 00:28:52.438651 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:28:52.440272 kubelet[2967]: I0513 00:28:52.440062 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:28:52.441067 kubelet[2967]: I0513 00:28:52.441011 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:28:52.443017 kubelet[2967]: I0513 00:28:52.442318 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:28:52.443017 kubelet[2967]: I0513 00:28:52.442896 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:28:52.444163 kubelet[2967]: I0513 00:28:52.443620 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:28:52.444526 kubelet[2967]: I0513 00:28:52.444501 2967 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:28:52.484005 containerd[1654]: time="2025-05-13T00:28:52.483839772Z" level=info msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" May 13 00:28:52.485041 containerd[1654]: time="2025-05-13T00:28:52.484357746Z" level=info msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" May 13 00:28:52.485041 containerd[1654]: time="2025-05-13T00:28:52.485034990Z" level=info msg="Ensure that sandbox f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b in task-service has been cleanup successfully" May 13 00:28:52.485129 containerd[1654]: time="2025-05-13T00:28:52.485117761Z" level=info msg="Ensure that sandbox 078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436 in task-service has been cleanup successfully" May 13 00:28:52.485636 containerd[1654]: time="2025-05-13T00:28:52.485625115Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:28:52.485785 containerd[1654]: time="2025-05-13T00:28:52.485775380Z" level=info msg="Ensure that sandbox e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18 in task-service has been cleanup successfully" May 13 00:28:52.486023 containerd[1654]: time="2025-05-13T00:28:52.486007935Z" level=info msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" May 13 00:28:52.486104 containerd[1654]: time="2025-05-13T00:28:52.486090510Z" level=info msg="Ensure that sandbox 18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8 in task-service has been cleanup successfully" May 13 
00:28:52.486461 containerd[1654]: time="2025-05-13T00:28:52.486446284Z" level=info msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" May 13 00:28:52.486529 containerd[1654]: time="2025-05-13T00:28:52.486516011Z" level=info msg="Ensure that sandbox 430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098 in task-service has been cleanup successfully" May 13 00:28:52.487287 containerd[1654]: time="2025-05-13T00:28:52.487270729Z" level=info msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" May 13 00:28:52.487374 containerd[1654]: time="2025-05-13T00:28:52.487356447Z" level=info msg="Ensure that sandbox 3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9 in task-service has been cleanup successfully" May 13 00:28:52.487715 containerd[1654]: time="2025-05-13T00:28:52.487700337Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:28:52.492049 containerd[1654]: time="2025-05-13T00:28:52.492024208Z" level=info msg="Ensure that sandbox 95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073 in task-service has been cleanup successfully" May 13 00:28:52.567566 containerd[1654]: time="2025-05-13T00:28:52.567536151Z" level=error msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" failed" error="failed to destroy network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.568796 containerd[1654]: time="2025-05-13T00:28:52.567761242Z" level=error msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" failed" error="failed to destroy network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.569287 containerd[1654]: time="2025-05-13T00:28:52.569271167Z" level=error msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" failed" error="failed to destroy network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.573251 containerd[1654]: time="2025-05-13T00:28:52.572681270Z" level=error msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" failed" error="failed to destroy network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.574315 containerd[1654]: time="2025-05-13T00:28:52.574249857Z" level=error msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" failed" error="failed to destroy network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.574862 kubelet[2967]: E0513 00:28:52.567853 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:28:52.574862 kubelet[2967]: E0513 00:28:52.574687 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436"} May 13 00:28:52.574862 kubelet[2967]: E0513 00:28:52.574740 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.574862 kubelet[2967]: E0513 00:28:52.574757 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wnsqm" podUID="6eb2e449-b9a2-4894-b3ae-1030b0bd4c24" May 13 00:28:52.575254 kubelet[2967]: E0513 00:28:52.575082 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:28:52.575254 kubelet[2967]: E0513 00:28:52.575199 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073"} May 13 00:28:52.575254 kubelet[2967]: E0513 00:28:52.575221 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.575254 kubelet[2967]: E0513 00:28:52.575233 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" May 13 00:28:52.575384 kubelet[2967]: E0513 00:28:52.567687 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:28:52.575384 kubelet[2967]: E0513 00:28:52.575322 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098"} May 13 00:28:52.575384 kubelet[2967]: E0513 00:28:52.575348 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ecaa235-5efa-4dbf-aaf9-e49c99601adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.575384 kubelet[2967]: E0513 00:28:52.575360 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ecaa235-5efa-4dbf-aaf9-e49c99601adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jpqnf" podUID="5ecaa235-5efa-4dbf-aaf9-e49c99601adb" May 13 00:28:52.575799 kubelet[2967]: E0513 00:28:52.575384 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:28:52.575799 kubelet[2967]: E0513 00:28:52.575394 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9"} May 13 00:28:52.575799 kubelet[2967]: E0513 00:28:52.575406 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15d3cbd6-98df-4be7-b394-57ec06b3789c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.575799 kubelet[2967]: E0513 00:28:52.575416 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15d3cbd6-98df-4be7-b394-57ec06b3789c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" podUID="15d3cbd6-98df-4be7-b394-57ec06b3789c" May 13 00:28:52.575909 kubelet[2967]: E0513 00:28:52.575589 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:28:52.575909 kubelet[2967]: E0513 00:28:52.575606 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8"} May 13 00:28:52.575909 kubelet[2967]: E0513 00:28:52.575624 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8207c568-159c-4015-827d-1b226c94f3cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.575909 kubelet[2967]: E0513 00:28:52.575637 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8207c568-159c-4015-827d-1b226c94f3cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gfr9r" podUID="8207c568-159c-4015-827d-1b226c94f3cf" May 13 00:28:52.576636 containerd[1654]: time="2025-05-13T00:28:52.576612484Z" level=error msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" failed" error="failed to destroy network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.576831 kubelet[2967]: E0513 00:28:52.576716 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:28:52.576831 kubelet[2967]: E0513 00:28:52.576738 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b"} May 13 00:28:52.576831 kubelet[2967]: E0513 00:28:52.576769 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.576831 kubelet[2967]: E0513 00:28:52.576791 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" podUID="d2f1ddd1-14cd-452c-934e-9a28944a7b23" May 13 00:28:52.577109 containerd[1654]: time="2025-05-13T00:28:52.577089100Z" level=error msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" failed" error="failed to destroy network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:28:52.577180 kubelet[2967]: E0513 00:28:52.577161 2967 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:28:52.577242 kubelet[2967]: E0513 00:28:52.577182 2967 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18"} May 13 00:28:52.577242 kubelet[2967]: E0513 00:28:52.577196 2967 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d11f4d12-c6f0-4786-9ade-890faf15b637\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:28:52.577242 kubelet[2967]: E0513 00:28:52.577207 2967 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d11f4d12-c6f0-4786-9ade-890faf15b637\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" May 13 00:28:55.517689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478322370.mount: Deactivated successfully. May 13 00:28:55.815821 containerd[1654]: time="2025-05-13T00:28:55.813662355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:55.833402 containerd[1654]: time="2025-05-13T00:28:55.833238301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 00:28:55.888364 containerd[1654]: time="2025-05-13T00:28:55.888317874Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:55.912731 containerd[1654]: time="2025-05-13T00:28:55.912688629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:28:55.914518 containerd[1654]: time="2025-05-13T00:28:55.914499341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 4.476368678s" May 13 00:28:55.914559 containerd[1654]: time="2025-05-13T00:28:55.914521115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 00:28:56.173835 containerd[1654]: time="2025-05-13T00:28:56.173663171Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:28:56.274158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962821497.mount: Deactivated successfully. 
May 13 00:28:56.277184 containerd[1654]: time="2025-05-13T00:28:56.277148105Z" level=info msg="CreateContainer within sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\"" May 13 00:28:56.282130 containerd[1654]: time="2025-05-13T00:28:56.282071798Z" level=info msg="StartContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\"" May 13 00:28:56.346790 containerd[1654]: time="2025-05-13T00:28:56.346769787Z" level=info msg="StartContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" returns successfully" May 13 00:28:56.502680 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:28:56.502742 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 00:28:56.637520 kubelet[2967]: I0513 00:28:56.619784 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8sn2r" podStartSLOduration=1.376535708 podStartE2EDuration="16.566864867s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:28:40.724597007 +0000 UTC m=+20.546681100" lastFinishedPulling="2025-05-13 00:28:55.914926166 +0000 UTC m=+35.737010259" observedRunningTime="2025-05-13 00:28:56.566441504 +0000 UTC m=+36.388525606" watchObservedRunningTime="2025-05-13 00:28:56.566864867 +0000 UTC m=+36.388948963" May 13 00:28:56.830402 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:28:56.838901 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:28:56.830425 systemd-resolved[1542]: Flushed all caches. May 13 00:28:57.251599 kubelet[2967]: I0513 00:28:57.251462 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:28:58.436348 kernel: bpftool[4255]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:28:58.635433 systemd-networkd[1286]: vxlan.calico: Link UP May 13 00:28:58.635437 systemd-networkd[1286]: vxlan.calico: Gained carrier May 13 00:28:59.773422 systemd-networkd[1286]: vxlan.calico: Gained IPv6LL May 13 00:29:05.321290 containerd[1654]: time="2025-05-13T00:29:05.321089724Z" level=info msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" May 13 00:29:05.321290 containerd[1654]: time="2025-05-13T00:29:05.321155371Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:29:05.322524 containerd[1654]: time="2025-05-13T00:29:05.321106774Z" level=info msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.407 [INFO][4403] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.409 [INFO][4403] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" iface="eth0" netns="/var/run/netns/cni-b2e61c86-e747-2ec9-3974-e71dad117153" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.409 [INFO][4403] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" iface="eth0" netns="/var/run/netns/cni-b2e61c86-e747-2ec9-3974-e71dad117153" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4403] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" iface="eth0" netns="/var/run/netns/cni-b2e61c86-e747-2ec9-3974-e71dad117153" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.757 [INFO][4428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.759 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.760 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.774 [WARNING][4428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.774 [INFO][4428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.775 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:05.779817 containerd[1654]: 2025-05-13 00:29:05.777 [INFO][4403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:05.788498 containerd[1654]: time="2025-05-13T00:29:05.782578463Z" level=info msg="TearDown network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" successfully" May 13 00:29:05.788498 containerd[1654]: time="2025-05-13T00:29:05.782598152Z" level=info msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" returns successfully" May 13 00:29:05.788498 containerd[1654]: time="2025-05-13T00:29:05.784652055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jpqnf,Uid:5ecaa235-5efa-4dbf-aaf9-e49c99601adb,Namespace:kube-system,Attempt:1,}" May 13 00:29:05.783457 systemd[1]: run-netns-cni\x2db2e61c86\x2de747\x2d2ec9\x2d3974\x2de71dad117153.mount: Deactivated successfully. 
May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.417 [INFO][4405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.417 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" iface="eth0" netns="/var/run/netns/cni-faaa449c-d813-cb56-9e2f-6d17577541a6" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.417 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" iface="eth0" netns="/var/run/netns/cni-faaa449c-d813-cb56-9e2f-6d17577541a6" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" iface="eth0" netns="/var/run/netns/cni-faaa449c-d813-cb56-9e2f-6d17577541a6" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.758 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.760 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.775 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.783 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.783 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.785 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:05.792611 containerd[1654]: 2025-05-13 00:29:05.789 [INFO][4405] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:05.794072 containerd[1654]: time="2025-05-13T00:29:05.794046218Z" level=info msg="TearDown network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" successfully" May 13 00:29:05.794072 containerd[1654]: time="2025-05-13T00:29:05.794070841Z" level=info msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" returns successfully" May 13 00:29:05.799071 containerd[1654]: time="2025-05-13T00:29:05.794548189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76896c5c69-r4r9w,Uid:15d3cbd6-98df-4be7-b394-57ec06b3789c,Namespace:calico-apiserver,Attempt:1,}" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.406 [INFO][4409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.409 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="/var/run/netns/cni-69604905-70f3-d40e-3a1c-4d57b5165fb6" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.409 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="/var/run/netns/cni-69604905-70f3-d40e-3a1c-4d57b5165fb6" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="/var/run/netns/cni-69604905-70f3-d40e-3a1c-4d57b5165fb6" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.425 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.758 [INFO][4432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.760 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.785 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.790 [WARNING][4432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.790 [INFO][4432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.791 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:05.799071 containerd[1654]: 2025-05-13 00:29:05.795 [INFO][4409] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:05.799071 containerd[1654]: time="2025-05-13T00:29:05.797449218Z" level=info msg="TearDown network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" successfully" May 13 00:29:05.799071 containerd[1654]: time="2025-05-13T00:29:05.797461786Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" returns successfully" May 13 00:29:05.796035 systemd[1]: run-netns-cni\x2dfaaa449c\x2dd813\x2dcb56\x2d9e2f\x2d6d17577541a6.mount: Deactivated successfully. May 13 00:29:05.799203 systemd[1]: run-netns-cni\x2d69604905\x2d70f3\x2dd40e\x2d3a1c\x2d4d57b5165fb6.mount: Deactivated successfully. May 13 00:29:05.800295 containerd[1654]: time="2025-05-13T00:29:05.800020511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-hwxmq,Uid:f38b69e3-9879-44d1-9fca-1ad977d43c8a,Namespace:calico-apiserver,Attempt:1,}" May 13 00:29:05.999650 systemd-networkd[1286]: calibd8a7f0d829: Link UP May 13 00:29:05.999795 systemd-networkd[1286]: calibd8a7f0d829: Gained carrier May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.877 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0 calico-apiserver-76896c5c69- calico-apiserver 15d3cbd6-98df-4be7-b394-57ec06b3789c 772 0 2025-05-13 00:28:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76896c5c69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76896c5c69-r4r9w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd8a7f0d829 [] []}} ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.877 [INFO][4452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.925 [INFO][4485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" HandleID="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.933 [INFO][4485] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" HandleID="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002843a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76896c5c69-r4r9w", "timestamp":"2025-05-13 00:29:05.925048058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.933 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.933 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.933 [INFO][4485] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.934 [INFO][4485] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.943 [INFO][4485] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.947 [INFO][4485] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.948 [INFO][4485] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.950 [INFO][4485] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.950 [INFO][4485] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.951 [INFO][4485] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.963 [INFO][4485] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.989 [INFO][4485] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.989 [INFO][4485] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" host="localhost" May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.989 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:06.028246 containerd[1654]: 2025-05-13 00:29:05.989 [INFO][4485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" HandleID="k8s-pod-network.d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:05.993 [INFO][4452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d3cbd6-98df-4be7-b394-57ec06b3789c", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76896c5c69-r4r9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd8a7f0d829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:05.993 [INFO][4452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:05.993 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd8a7f0d829 ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:05.999 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:06.000 [INFO][4452] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d3cbd6-98df-4be7-b394-57ec06b3789c", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc", Pod:"calico-apiserver-76896c5c69-r4r9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd8a7f0d829", MAC:"2e:5c:e3:1d:42:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.033920 containerd[1654]: 2025-05-13 00:29:06.024 [INFO][4452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-r4r9w" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:06.052625 containerd[1654]: time="2025-05-13T00:29:06.052395652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:06.052625 containerd[1654]: time="2025-05-13T00:29:06.052448814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:06.052625 containerd[1654]: time="2025-05-13T00:29:06.052455755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.058760 containerd[1654]: time="2025-05-13T00:29:06.058718141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.097285 systemd-networkd[1286]: calid947ce0e2f2: Link UP May 13 00:29:06.097724 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:06.097857 systemd-networkd[1286]: calid947ce0e2f2: Gained carrier May 13 00:29:06.148163 kubelet[2967]: I0513 00:29:06.147506 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.890 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0 coredns-7db6d8ff4d- kube-system 5ecaa235-5efa-4dbf-aaf9-e49c99601adb 770 0 2025-05-13 00:28:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-jpqnf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid947ce0e2f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.890 [INFO][4450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.934 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" HandleID="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.945 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" HandleID="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031aa50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-jpqnf", "timestamp":"2025-05-13 00:29:05.934913689 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.945 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.990 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.990 [INFO][4490] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:05.995 [INFO][4490] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.014 [INFO][4490] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.025 [INFO][4490] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.030 [INFO][4490] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.045 [INFO][4490] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.045 [INFO][4490] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.056 [INFO][4490] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2 May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.066 [INFO][4490] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.083 [INFO][4490] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.083 [INFO][4490] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" host="localhost" May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.083 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:06.153807 containerd[1654]: 2025-05-13 00:29:06.083 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" HandleID="k8s-pod-network.747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.091 [INFO][4450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5ecaa235-5efa-4dbf-aaf9-e49c99601adb", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-jpqnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid947ce0e2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.091 [INFO][4450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.091 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid947ce0e2f2 ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.102 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.118 
[INFO][4450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5ecaa235-5efa-4dbf-aaf9-e49c99601adb", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2", Pod:"coredns-7db6d8ff4d-jpqnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid947ce0e2f2", MAC:"ae:11:1d:8c:ef:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.154294 containerd[1654]: 2025-05-13 00:29:06.129 [INFO][4450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jpqnf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:06.161096 systemd-networkd[1286]: cali70442139ad3: Link UP May 13 00:29:06.161262 systemd-networkd[1286]: cali70442139ad3: Gained carrier May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:05.908 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0 calico-apiserver-5fcdd59ffd- calico-apiserver f38b69e3-9879-44d1-9fca-1ad977d43c8a 771 0 2025-05-13 00:28:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcdd59ffd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcdd59ffd-hwxmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70442139ad3 [] []}} ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:05.908 [INFO][4470] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:05.954 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:05.964 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000303520), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fcdd59ffd-hwxmq", "timestamp":"2025-05-13 00:29:05.954519234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:05.964 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.094 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.094 [INFO][4496] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.115 [INFO][4496] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.127 [INFO][4496] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.131 [INFO][4496] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.136 [INFO][4496] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.141 [INFO][4496] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.141 [INFO][4496] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.142 [INFO][4496] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087 May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.148 [INFO][4496] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.154 [INFO][4496] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.154 [INFO][4496] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" host="localhost" May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.155 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:06.182594 containerd[1654]: 2025-05-13 00:29:06.155 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.158 [INFO][4470] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f38b69e3-9879-44d1-9fca-1ad977d43c8a", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcdd59ffd-hwxmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70442139ad3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.158 [INFO][4470] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.158 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70442139ad3 ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.161 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.162 [INFO][4470] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f38b69e3-9879-44d1-9fca-1ad977d43c8a", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087", Pod:"calico-apiserver-5fcdd59ffd-hwxmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70442139ad3", MAC:"e2:35:15:79:52:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:06.183419 containerd[1654]: 2025-05-13 00:29:06.178 [INFO][4470] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-hwxmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:06.189608 containerd[1654]: time="2025-05-13T00:29:06.184433643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76896c5c69-r4r9w,Uid:15d3cbd6-98df-4be7-b394-57ec06b3789c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc\"" May 13 00:29:06.201491 containerd[1654]: time="2025-05-13T00:29:06.201079050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:06.201491 containerd[1654]: time="2025-05-13T00:29:06.201126474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:06.201491 containerd[1654]: time="2025-05-13T00:29:06.201139755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.201491 containerd[1654]: time="2025-05-13T00:29:06.201258813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.201826 containerd[1654]: time="2025-05-13T00:29:06.201636076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:29:06.222574 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:06.223067 containerd[1654]: time="2025-05-13T00:29:06.222957258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:06.223067 containerd[1654]: time="2025-05-13T00:29:06.223005036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:06.223067 containerd[1654]: time="2025-05-13T00:29:06.223028533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.223245 containerd[1654]: time="2025-05-13T00:29:06.223212538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:06.251101 containerd[1654]: time="2025-05-13T00:29:06.251032520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jpqnf,Uid:5ecaa235-5efa-4dbf-aaf9-e49c99601adb,Namespace:kube-system,Attempt:1,} returns sandbox id \"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2\"" May 13 00:29:06.252741 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:06.299855 containerd[1654]: time="2025-05-13T00:29:06.298299351Z" level=info msg="CreateContainer within sandbox \"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:29:06.310898 containerd[1654]: time="2025-05-13T00:29:06.310876156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-hwxmq,Uid:f38b69e3-9879-44d1-9fca-1ad977d43c8a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\"" May 13 00:29:06.332396 containerd[1654]: time="2025-05-13T00:29:06.332374889Z" level=info msg="CreateContainer within sandbox \"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2be164e617abef294070551334a7780f9b912397ef716d86bd99bac568dece7\"" May 13 00:29:06.333828 containerd[1654]: time="2025-05-13T00:29:06.333694028Z" level=info msg="StartContainer for \"e2be164e617abef294070551334a7780f9b912397ef716d86bd99bac568dece7\"" May 13 00:29:06.420654 containerd[1654]: time="2025-05-13T00:29:06.420589396Z" level=info msg="StartContainer for \"e2be164e617abef294070551334a7780f9b912397ef716d86bd99bac568dece7\" returns successfully" May 13 00:29:06.565532 kubelet[2967]: I0513 00:29:06.565161 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jpqnf" podStartSLOduration=32.565150973 podStartE2EDuration="32.565150973s" podCreationTimestamp="2025-05-13 00:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:06.564591741 +0000 UTC m=+46.386675843" watchObservedRunningTime="2025-05-13 00:29:06.565150973 +0000 UTC m=+46.387235070" May 13 
00:29:07.070458 systemd-networkd[1286]: calibd8a7f0d829: Gained IPv6LL May 13 00:29:07.133422 systemd-networkd[1286]: calid947ce0e2f2: Gained IPv6LL May 13 00:29:07.321148 containerd[1654]: time="2025-05-13T00:29:07.320912647Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:29:07.338266 containerd[1654]: time="2025-05-13T00:29:07.338241190Z" level=info msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" May 13 00:29:07.338915 containerd[1654]: time="2025-05-13T00:29:07.338440600Z" level=info msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.385 [INFO][4801] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.385 [INFO][4801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="/var/run/netns/cni-2cb6a02c-2c27-f731-6340-5fd159aa6ffb" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.385 [INFO][4801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="/var/run/netns/cni-2cb6a02c-2c27-f731-6340-5fd159aa6ffb" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.386 [INFO][4801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="/var/run/netns/cni-2cb6a02c-2c27-f731-6340-5fd159aa6ffb" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.386 [INFO][4801] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.386 [INFO][4801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.407 [INFO][4812] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.407 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.407 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.416 [WARNING][4812] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.416 [INFO][4812] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.418 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:07.422110 containerd[1654]: 2025-05-13 00:29:07.420 [INFO][4801] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:07.425342 containerd[1654]: time="2025-05-13T00:29:07.423665576Z" level=info msg="TearDown network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" successfully" May 13 00:29:07.425342 containerd[1654]: time="2025-05-13T00:29:07.424473890Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" returns successfully" May 13 00:29:07.426075 systemd[1]: run-netns-cni\x2d2cb6a02c\x2d2c27\x2df731\x2d6340\x2d5fd159aa6ffb.mount: Deactivated successfully. May 13 00:29:07.427220 containerd[1654]: time="2025-05-13T00:29:07.426935698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786f4bd5f6-5nhlr,Uid:d11f4d12-c6f0-4786-9ade-890faf15b637,Namespace:calico-system,Attempt:1,}" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.414 [INFO][4782] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.414 [INFO][4782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" iface="eth0" netns="/var/run/netns/cni-93d22267-be09-8b2e-15a2-8814e4abcc47" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.415 [INFO][4782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" iface="eth0" netns="/var/run/netns/cni-93d22267-be09-8b2e-15a2-8814e4abcc47" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.416 [INFO][4782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" iface="eth0" netns="/var/run/netns/cni-93d22267-be09-8b2e-15a2-8814e4abcc47" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.416 [INFO][4782] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.416 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.457 [INFO][4820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.457 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.457 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.463 [WARNING][4820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.466 [INFO][4820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.468 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:07.473840 containerd[1654]: 2025-05-13 00:29:07.470 [INFO][4782] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:07.476069 containerd[1654]: time="2025-05-13T00:29:07.475912827Z" level=info msg="TearDown network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" successfully" May 13 00:29:07.476069 containerd[1654]: time="2025-05-13T00:29:07.475930662Z" level=info msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" returns successfully" May 13 00:29:07.476691 systemd[1]: run-netns-cni\x2d93d22267\x2dbe09\x2d8b2e\x2d15a2\x2d8814e4abcc47.mount: Deactivated successfully. May 13 00:29:07.477416 containerd[1654]: time="2025-05-13T00:29:07.477400585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnsqm,Uid:6eb2e449-b9a2-4894-b3ae-1030b0bd4c24,Namespace:kube-system,Attempt:1,}" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.429 [INFO][4797] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.429 [INFO][4797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" iface="eth0" netns="/var/run/netns/cni-8ff441e9-bb50-045c-0f9a-900d413ee335" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.429 [INFO][4797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" iface="eth0" netns="/var/run/netns/cni-8ff441e9-bb50-045c-0f9a-900d413ee335" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.430 [INFO][4797] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" iface="eth0" netns="/var/run/netns/cni-8ff441e9-bb50-045c-0f9a-900d413ee335" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.430 [INFO][4797] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.430 [INFO][4797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.469 [INFO][4825] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.469 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.469 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.478 [WARNING][4825] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.478 [INFO][4825] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.481 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:07.486625 containerd[1654]: 2025-05-13 00:29:07.483 [INFO][4797] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:07.490310 containerd[1654]: time="2025-05-13T00:29:07.487098732Z" level=info msg="TearDown network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" successfully" May 13 00:29:07.490310 containerd[1654]: time="2025-05-13T00:29:07.487122979Z" level=info msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" returns successfully" May 13 00:29:07.490310 containerd[1654]: time="2025-05-13T00:29:07.487689482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfr9r,Uid:8207c568-159c-4015-827d-1b226c94f3cf,Namespace:calico-system,Attempt:1,}" May 13 00:29:07.553347 systemd-networkd[1286]: cali53d73bed430: Link UP May 13 00:29:07.553520 systemd-networkd[1286]: cali53d73bed430: Gained carrier May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.464 [INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0 calico-kube-controllers-786f4bd5f6- calico-system d11f4d12-c6f0-4786-9ade-890faf15b637 799 0 2025-05-13 00:28:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:786f4bd5f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-786f4bd5f6-5nhlr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali53d73bed430 [] []}} ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.465 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.505 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.519 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-786f4bd5f6-5nhlr", "timestamp":"2025-05-13 00:29:07.505906987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 
00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.519 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.519 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.519 [INFO][4846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.520 [INFO][4846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.523 [INFO][4846] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.525 [INFO][4846] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.526 [INFO][4846] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.528 [INFO][4846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.528 [INFO][4846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.529 [INFO][4846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5 May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.533 [INFO][4846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.544 [INFO][4846] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.544 [INFO][4846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" host="localhost" May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.544 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:07.574055 containerd[1654]: 2025-05-13 00:29:07.544 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.546 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0", GenerateName:"calico-kube-controllers-786f4bd5f6-", Namespace:"calico-system", SelfLink:"", UID:"d11f4d12-c6f0-4786-9ade-890faf15b637", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786f4bd5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-786f4bd5f6-5nhlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53d73bed430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.546 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.546 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53d73bed430 ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.554 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.556 [INFO][4830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0", GenerateName:"calico-kube-controllers-786f4bd5f6-", Namespace:"calico-system", SelfLink:"", UID:"d11f4d12-c6f0-4786-9ade-890faf15b637", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786f4bd5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5", Pod:"calico-kube-controllers-786f4bd5f6-5nhlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53d73bed430", MAC:"26:0a:d6:b3:58:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.583747 containerd[1654]: 2025-05-13 00:29:07.570 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Namespace="calico-system" Pod="calico-kube-controllers-786f4bd5f6-5nhlr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:07.654886 systemd-networkd[1286]: cali1cb572dadb3: Link UP May 13 00:29:07.655658 systemd-networkd[1286]: cali1cb572dadb3: Gained carrier May 13 00:29:07.659838 containerd[1654]: time="2025-05-13T00:29:07.656225781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:07.659838 containerd[1654]: time="2025-05-13T00:29:07.656259663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:07.659838 containerd[1654]: time="2025-05-13T00:29:07.656266706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.659838 containerd[1654]: time="2025-05-13T00:29:07.656446151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.545 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0 coredns-7db6d8ff4d- kube-system 6eb2e449-b9a2-4894-b3ae-1030b0bd4c24 800 0 2025-05-13 00:28:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-wnsqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cb572dadb3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.545 [INFO][4854] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.600 [INFO][4879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" HandleID="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.614 [INFO][4879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" HandleID="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2e00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-wnsqm", "timestamp":"2025-05-13 00:29:07.600920019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.614 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.614 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.614 [INFO][4879] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.616 [INFO][4879] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.621 [INFO][4879] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.629 [INFO][4879] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.633 [INFO][4879] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.634 [INFO][4879] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.635 [INFO][4879] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.637 [INFO][4879] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.643 [INFO][4879] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.648 [INFO][4879] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.649 [INFO][4879] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" host="localhost" May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.649 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
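The ipam entries above trace Calico's block-affinity assignment for the coredns pod: the plugin takes the host-wide IPAM lock, confirms localhost's affinity for the 192.168.88.128/26 block, claims the next free address (192.168.88.133) under a per-container handle, writes the block back to persist the claim, and releases the lock. Below is a minimal, self-contained sketch of that sequence; the types, names, and pre-allocated addresses are invented for illustration and are not Calico's actual implementation.

```go
// Toy model of the block-affinity IPAM flow logged above. Not Calico code:
// the ipam/block types and autoAssign helper are invented stand-ins.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one /26 allocation block affine to a single host.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle that claimed it
}

type ipam struct {
	mu     sync.Mutex        // stands in for the host-wide IPAM lock
	blocks map[string]*block // host -> affine block
}

// autoAssign claims one free address from the host's affine block and
// records it under the given handle (mirroring "Attempting to assign 1
// addresses from block" followed by "Writing block in order to claim IPs").
func (i *ipam) autoAssign(host, handle string) (netip.Addr, error) {
	i.mu.Lock() // "Acquired host-wide IPAM lock."
	defer i.mu.Unlock()

	blk, ok := i.blocks[host] // "Trying affinity for 192.168.88.128/26"
	if !ok {
		return netip.Addr{}, fmt.Errorf("no block affine to host %q", host)
	}
	// Walk the block and take the first unallocated address.
	for a := blk.cidr.Addr(); blk.cidr.Contains(a); a = a.Next() {
		if _, used := blk.allocated[a]; !used {
			blk.allocated[a] = handle // persisting the block claims the IP
			return a, nil             // "Successfully claimed IPs"
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", blk.cidr)
}

func main() {
	blk := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	// Pretend .128-.132 were taken by earlier pods, so the next claim
	// lands on .133 as it does in the log.
	for _, used := range []string{"192.168.88.128", "192.168.88.129",
		"192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		blk.allocated[netip.MustParseAddr(used)] = "existing"
	}
	pool := &ipam{blocks: map[string]*block{"localhost": blk}}

	ip, err := pool.autoAssign("localhost", "demo-handle")
	fmt.Println(ip, err) // 192.168.88.133 <nil>
}
```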
May 13 00:29:07.684630 containerd[1654]: 2025-05-13 00:29:07.649 [INFO][4879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" HandleID="k8s-pod-network.74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.652 [INFO][4854] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-wnsqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb572dadb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.653 [INFO][4854] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.653 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cb572dadb3 ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.655 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.661 
[INFO][4854] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb", Pod:"coredns-7db6d8ff4d-wnsqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb572dadb3", MAC:"aa:3e:05:e2:5b:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.687191 containerd[1654]: 2025-05-13 00:29:07.677 [INFO][4854] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wnsqm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:07.716564 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:07.721905 containerd[1654]: time="2025-05-13T00:29:07.721471231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:07.721905 containerd[1654]: time="2025-05-13T00:29:07.721514127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:07.721905 containerd[1654]: time="2025-05-13T00:29:07.721532317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.721905 containerd[1654]: time="2025-05-13T00:29:07.721718015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.748990 systemd-networkd[1286]: calid510999f90e: Link UP May 13 00:29:07.749755 systemd-networkd[1286]: calid510999f90e: Gained carrier May 13 00:29:07.755692 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.591 [INFO][4866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gfr9r-eth0 csi-node-driver- calico-system 8207c568-159c-4015-827d-1b226c94f3cf 801 0 2025-05-13 00:28:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gfr9r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid510999f90e [] []}} ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.591 [INFO][4866] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.666 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" HandleID="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.677 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" HandleID="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gfr9r", "timestamp":"2025-05-13 00:29:07.663999707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.677 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.677 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.677 [INFO][4900] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.683 [INFO][4900] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.687 [INFO][4900] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.698 [INFO][4900] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.703 [INFO][4900] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.708 [INFO][4900] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.708 [INFO][4900] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.713 [INFO][4900] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698 May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.717 [INFO][4900] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.737 [INFO][4900] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.737 [INFO][4900] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" host="localhost" May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.737 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:07.772641 containerd[1654]: 2025-05-13 00:29:07.737 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" HandleID="k8s-pod-network.356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.743 [INFO][4866] cni-plugin/k8s.go 386: Populated endpoint ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfr9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8207c568-159c-4015-827d-1b226c94f3cf", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gfr9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid510999f90e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.744 [INFO][4866] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.744 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid510999f90e ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.752 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.752 [INFO][4866] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfr9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8207c568-159c-4015-827d-1b226c94f3cf", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698", Pod:"csi-node-driver-gfr9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid510999f90e", MAC:"46:a5:d0:b6:0b:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:07.784929 containerd[1654]: 2025-05-13 00:29:07.771 [INFO][4866] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698" Namespace="calico-system" Pod="csi-node-driver-gfr9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:07.774047 systemd-networkd[1286]: cali70442139ad3: Gained IPv6LL May 13 00:29:07.787174 systemd[1]: run-netns-cni\x2d8ff441e9\x2dbb50\x2d045c\x2d0f9a\x2d900d413ee335.mount: Deactivated successfully. May 13 00:29:07.794912 containerd[1654]: time="2025-05-13T00:29:07.794850948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-786f4bd5f6-5nhlr,Uid:d11f4d12-c6f0-4786-9ade-890faf15b637,Namespace:calico-system,Attempt:1,} returns sandbox id \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\"" May 13 00:29:07.807935 containerd[1654]: time="2025-05-13T00:29:07.807832426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnsqm,Uid:6eb2e449-b9a2-4894-b3ae-1030b0bd4c24,Namespace:kube-system,Attempt:1,} returns sandbox id \"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb\"" May 13 00:29:07.810408 containerd[1654]: time="2025-05-13T00:29:07.810345294Z" level=info msg="CreateContainer within sandbox \"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:29:07.826035 containerd[1654]: time="2025-05-13T00:29:07.814475437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:07.826035 containerd[1654]: time="2025-05-13T00:29:07.814598813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:07.826035 containerd[1654]: time="2025-05-13T00:29:07.814610468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.826035 containerd[1654]: time="2025-05-13T00:29:07.815290916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:07.837697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197286167.mount: Deactivated successfully. May 13 00:29:07.850698 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:07.860457 containerd[1654]: time="2025-05-13T00:29:07.860432300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfr9r,Uid:8207c568-159c-4015-827d-1b226c94f3cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698\"" May 13 00:29:08.103053 containerd[1654]: time="2025-05-13T00:29:08.102942361Z" level=info msg="CreateContainer within sandbox \"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39378bfbbb5a33fcec002f06e3dff63bc48257fd92c06a867ddb3f95cd398dc2\"" May 13 00:29:08.104019 containerd[1654]: time="2025-05-13T00:29:08.103990125Z" level=info msg="StartContainer for \"39378bfbbb5a33fcec002f06e3dff63bc48257fd92c06a867ddb3f95cd398dc2\"" May 13 00:29:08.157195 containerd[1654]: time="2025-05-13T00:29:08.157087496Z" level=info msg="StartContainer for \"39378bfbbb5a33fcec002f06e3dff63bc48257fd92c06a867ddb3f95cd398dc2\" returns successfully" May 13 00:29:08.321347 containerd[1654]: time="2025-05-13T00:29:08.321318406Z" level=info msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.361 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.361 [INFO][5113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" iface="eth0" netns="/var/run/netns/cni-7355f8ce-9769-94fd-354b-fe41fea92d37" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.361 [INFO][5113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" iface="eth0" netns="/var/run/netns/cni-7355f8ce-9769-94fd-354b-fe41fea92d37" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.361 [INFO][5113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" iface="eth0" netns="/var/run/netns/cni-7355f8ce-9769-94fd-354b-fe41fea92d37" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.362 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.362 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.384 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.384 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.384 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.393 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.393 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.395 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:08.400602 containerd[1654]: 2025-05-13 00:29:08.398 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:08.401391 containerd[1654]: time="2025-05-13T00:29:08.401369201Z" level=info msg="TearDown network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" successfully" May 13 00:29:08.401420 containerd[1654]: time="2025-05-13T00:29:08.401390360Z" level=info msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" returns successfully" May 13 00:29:08.402275 containerd[1654]: time="2025-05-13T00:29:08.402257375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-gtcr6,Uid:d2f1ddd1-14cd-452c-934e-9a28944a7b23,Namespace:calico-apiserver,Attempt:1,}" May 13 00:29:08.615968 kubelet[2967]: I0513 00:29:08.615618 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wnsqm" podStartSLOduration=33.615600018 podStartE2EDuration="33.615600018s" podCreationTimestamp="2025-05-13 00:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:08.596129251 +0000 UTC m=+48.418213345" watchObservedRunningTime="2025-05-13 00:29:08.615600018 +0000 UTC m=+48.437684115" May 13 00:29:08.733566 systemd-networkd[1286]: cali1cb572dadb3: Gained IPv6LL May 13 00:29:08.785715 systemd[1]: run-netns-cni\x2d7355f8ce\x2d9769\x2d94fd\x2d354b\x2dfe41fea92d37.mount: Deactivated successfully. May 13 00:29:09.011952 containerd[1654]: time="2025-05-13T00:29:09.011924854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:09.015136 containerd[1654]: time="2025-05-13T00:29:09.015096231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 00:29:09.017350 containerd[1654]: time="2025-05-13T00:29:09.017319617Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:09.021992 containerd[1654]: time="2025-05-13T00:29:09.021880631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:09.022152 containerd[1654]: time="2025-05-13T00:29:09.022137638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.820481812s" May 13 00:29:09.022188 containerd[1654]: time="2025-05-13T00:29:09.022155679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:29:09.035538 containerd[1654]: time="2025-05-13T00:29:09.035486793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:29:09.052860 containerd[1654]: time="2025-05-13T00:29:09.052760902Z" level=info msg="CreateContainer within sandbox \"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:29:09.053459 systemd-networkd[1286]: cali53d73bed430: Gained IPv6LL May 13 00:29:09.065532 containerd[1654]: time="2025-05-13T00:29:09.065463591Z" level=info msg="CreateContainer within sandbox \"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"47a57bbfc0205063cc5b4009e38f4ce59251517af67ec59de97bf42608c97431\"" May 13 00:29:09.066437 containerd[1654]: time="2025-05-13T00:29:09.066321128Z" level=info msg="StartContainer for \"47a57bbfc0205063cc5b4009e38f4ce59251517af67ec59de97bf42608c97431\"" May 13 00:29:09.067521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229418188.mount: Deactivated successfully. May 13 00:29:09.142006 systemd-networkd[1286]: cali980d56dedd6: Link UP May 13 00:29:09.142616 systemd-networkd[1286]: cali980d56dedd6: Gained carrier May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.039 [INFO][5139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0 calico-apiserver-5fcdd59ffd- calico-apiserver d2f1ddd1-14cd-452c-934e-9a28944a7b23 825 0 2025-05-13 00:28:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcdd59ffd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcdd59ffd-gtcr6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali980d56dedd6 [] []}} ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.039 [INFO][5139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.080 [INFO][5154] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.087 [INFO][5154] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fcdd59ffd-gtcr6", "timestamp":"2025-05-13 00:29:09.080552995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:09.173758 
containerd[1654]: 2025-05-13 00:29:09.087 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.087 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.087 [INFO][5154] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.090 [INFO][5154] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.093 [INFO][5154] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.101 [INFO][5154] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.105 [INFO][5154] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.107 [INFO][5154] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.107 [INFO][5154] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.108 [INFO][5154] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49 May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.125 [INFO][5154] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.137 [INFO][5154] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.137 [INFO][5154] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" host="localhost" May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.137 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:09.173758 containerd[1654]: 2025-05-13 00:29:09.137 [INFO][5154] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.139 [INFO][5139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f1ddd1-14cd-452c-934e-9a28944a7b23", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcdd59ffd-gtcr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali980d56dedd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.139 [INFO][5139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.139 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali980d56dedd6 ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.142 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.142 [INFO][5139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f1ddd1-14cd-452c-934e-9a28944a7b23", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49", Pod:"calico-apiserver-5fcdd59ffd-gtcr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali980d56dedd6", MAC:"8e:23:28:6e:c1:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:09.198041 containerd[1654]: 2025-05-13 00:29:09.169 [INFO][5139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdd59ffd-gtcr6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:09.198041 containerd[1654]: time="2025-05-13T00:29:09.177288714Z" level=info msg="StartContainer for \"47a57bbfc0205063cc5b4009e38f4ce59251517af67ec59de97bf42608c97431\" returns successfully" May 13 00:29:09.233828 containerd[1654]: time="2025-05-13T00:29:09.233781487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:09.234199 containerd[1654]: time="2025-05-13T00:29:09.234122171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:09.234310 containerd[1654]: time="2025-05-13T00:29:09.234242871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:09.234395 containerd[1654]: time="2025-05-13T00:29:09.234301662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:09.257060 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:09.294557 containerd[1654]: time="2025-05-13T00:29:09.294357732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdd59ffd-gtcr6,Uid:d2f1ddd1-14cd-452c-934e-9a28944a7b23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\"" May 13 00:29:09.309451 systemd-networkd[1286]: calid510999f90e: Gained IPv6LL May 13 00:29:09.320269 containerd[1654]: time="2025-05-13T00:29:09.320244687Z" level=info msg="CreateContainer within sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:29:09.408226 containerd[1654]: time="2025-05-13T00:29:09.408195641Z" level=info msg="CreateContainer within sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\"" May 13 00:29:09.408689 containerd[1654]: time="2025-05-13T00:29:09.408657256Z" level=info msg="StartContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\"" May 13 00:29:09.473606 containerd[1654]: time="2025-05-13T00:29:09.473569052Z" level=info msg="StartContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" returns successfully" May 13 00:29:09.507425 containerd[1654]: time="2025-05-13T00:29:09.507395031Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:09.513251 containerd[1654]: time="2025-05-13T00:29:09.513218770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 00:29:09.514557 containerd[1654]: time="2025-05-13T00:29:09.514535489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 479.02782ms" May 13 00:29:09.514557 containerd[1654]: time="2025-05-13T00:29:09.514554626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 00:29:09.516180 containerd[1654]: time="2025-05-13T00:29:09.516163354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:29:09.529239 containerd[1654]: time="2025-05-13T00:29:09.529215971Z" level=info msg="CreateContainer within sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:29:09.606067 kubelet[2967]: I0513 00:29:09.605547 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-gtcr6" podStartSLOduration=29.605535684 podStartE2EDuration="29.605535684s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:09.603740167 +0000 UTC m=+49.425824261" watchObservedRunningTime="2025-05-13 00:29:09.605535684 +0000 UTC m=+49.427619786" May 13 00:29:09.662279 containerd[1654]: time="2025-05-13T00:29:09.659378241Z" level=info msg="CreateContainer within sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\"" May 13 00:29:09.663788 containerd[1654]: time="2025-05-13T00:29:09.663305953Z" level=info msg="StartContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\"" May 13 00:29:09.729446 containerd[1654]: time="2025-05-13T00:29:09.729404419Z" level=info msg="StartContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" returns successfully" May 13 00:29:10.200113 kubelet[2967]: I0513 00:29:10.200071 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76896c5c69-r4r9w" podStartSLOduration=27.365786389 podStartE2EDuration="30.200057738s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:29:06.201095067 +0000 UTC m=+46.023179161" lastFinishedPulling="2025-05-13 00:29:09.035366416 +0000 UTC m=+48.857450510" observedRunningTime="2025-05-13 00:29:09.630906712 +0000 UTC m=+49.452990810" watchObservedRunningTime="2025-05-13 00:29:10.200057738 +0000 UTC m=+50.022141835" May 13 00:29:10.340559 kubelet[2967]: I0513 00:29:10.340343 2967 topology_manager.go:215] "Topology Admit Handler" podUID="8f505608-5468-4caa-86e4-869516b2809c" podNamespace="calico-apiserver" podName="calico-apiserver-76896c5c69-h9cqc" May 13 00:29:10.497265 kubelet[2967]: I0513 00:29:10.496792 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7l8\" (UniqueName: \"kubernetes.io/projected/8f505608-5468-4caa-86e4-869516b2809c-kube-api-access-pd7l8\") pod \"calico-apiserver-76896c5c69-h9cqc\" (UID: \"8f505608-5468-4caa-86e4-869516b2809c\") " pod="calico-apiserver/calico-apiserver-76896c5c69-h9cqc" May 13 00:29:10.497706 kubelet[2967]: I0513 00:29:10.497679 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f505608-5468-4caa-86e4-869516b2809c-calico-apiserver-certs\") pod \"calico-apiserver-76896c5c69-h9cqc\" (UID: \"8f505608-5468-4caa-86e4-869516b2809c\") " pod="calico-apiserver/calico-apiserver-76896c5c69-h9cqc" May 13 00:29:10.589678 kubelet[2967]: I0513 00:29:10.589654 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:29:10.652319 kubelet[2967]: I0513 00:29:10.652284 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5fcdd59ffd-hwxmq" podStartSLOduration=27.469613997 podStartE2EDuration="30.652271424s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:29:06.333357255 +0000 UTC m=+46.155441348" lastFinishedPulling="2025-05-13 00:29:09.516014682 +0000 UTC m=+49.338098775" observedRunningTime="2025-05-13 00:29:10.602271601 +0000 UTC m=+50.424355706" watchObservedRunningTime="2025-05-13 00:29:10.652271424 +0000 UTC m=+50.474355520" May 13 00:29:10.703687 containerd[1654]: time="2025-05-13T00:29:10.703660949Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76896c5c69-h9cqc,Uid:8f505608-5468-4caa-86e4-869516b2809c,Namespace:calico-apiserver,Attempt:0,}" May 13 00:29:10.786971 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:10.783103 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:10.783182 systemd-resolved[1542]: Flushed all caches. May 13 00:29:10.910392 systemd-networkd[1286]: cali980d56dedd6: Gained IPv6LL May 13 00:29:11.042852 systemd-networkd[1286]: calid6f0604a086: Link UP May 13 00:29:11.043321 systemd-networkd[1286]: calid6f0604a086: Gained carrier May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.800 [INFO][5335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0 calico-apiserver-76896c5c69- calico-apiserver 8f505608-5468-4caa-86e4-869516b2809c 880 0 2025-05-13 00:29:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76896c5c69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76896c5c69-h9cqc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid6f0604a086 [] []}} ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.801 [INFO][5335] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.871 [INFO][5346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" HandleID="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Workload="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.883 [INFO][5346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" HandleID="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Workload="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76896c5c69-h9cqc", "timestamp":"2025-05-13 00:29:10.871899881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.883 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.883 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.883 [INFO][5346] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.884 [INFO][5346] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.902 [INFO][5346] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.995 [INFO][5346] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:10.997 [INFO][5346] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.000 [INFO][5346] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.000 [INFO][5346] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.011 [INFO][5346] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80 May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.017 [INFO][5346] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.037 [INFO][5346] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.037 [INFO][5346] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" host="localhost" May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.037 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:29:11.060302 containerd[1654]: 2025-05-13 00:29:11.038 [INFO][5346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" HandleID="k8s-pod-network.8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Workload="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.040 [INFO][5335] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f505608-5468-4caa-86e4-869516b2809c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76896c5c69-h9cqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6f0604a086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.040 [INFO][5335] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.040 [INFO][5335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6f0604a086 ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.043 [INFO][5335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.044 [INFO][5335] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f505608-5468-4caa-86e4-869516b2809c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80", Pod:"calico-apiserver-76896c5c69-h9cqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6f0604a086", MAC:"5a:ce:95:21:82:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:11.072348 containerd[1654]: 2025-05-13 00:29:11.058 [INFO][5335] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80" Namespace="calico-apiserver" Pod="calico-apiserver-76896c5c69-h9cqc" WorkloadEndpoint="localhost-k8s-calico--apiserver--76896c5c69--h9cqc-eth0" May 13 00:29:11.369718 containerd[1654]: time="2025-05-13T00:29:11.369326085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:11.369718 containerd[1654]: time="2025-05-13T00:29:11.369385079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:11.369718 containerd[1654]: time="2025-05-13T00:29:11.369404119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:11.369718 containerd[1654]: time="2025-05-13T00:29:11.369466786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:11.413618 systemd[1]: run-containerd-runc-k8s.io-271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6-runc.i23BB5.mount: Deactivated successfully. May 13 00:29:11.432391 systemd[1]: run-containerd-runc-k8s.io-8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80-runc.fGvdue.mount: Deactivated successfully. 
May 13 00:29:11.454845 containerd[1654]: time="2025-05-13T00:29:11.451786522Z" level=info msg="StopContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" with timeout 300 (s)" May 13 00:29:11.459718 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:11.460864 containerd[1654]: time="2025-05-13T00:29:11.460532892Z" level=info msg="Stop container \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" with signal terminated" May 13 00:29:11.527385 containerd[1654]: time="2025-05-13T00:29:11.527357669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76896c5c69-h9cqc,Uid:8f505608-5468-4caa-86e4-869516b2809c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80\"" May 13 00:29:11.550636 containerd[1654]: time="2025-05-13T00:29:11.550606162Z" level=info msg="CreateContainer within sandbox \"8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:29:11.559988 containerd[1654]: time="2025-05-13T00:29:11.559909119Z" level=info msg="StopContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" with timeout 5 (s)" May 13 00:29:11.560254 containerd[1654]: time="2025-05-13T00:29:11.560182478Z" level=info msg="Stop container \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" with signal terminated" May 13 00:29:11.627423 containerd[1654]: time="2025-05-13T00:29:11.627020870Z" level=info msg="CreateContainer within sandbox \"8a71902ef82e27bd6a9fbc66a4a23c279eef28e29535c94c48e59b787eeaac80\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e02273b74ae5bf085e2cae5e052e0da70f86566d98844142df6d477524ae391\"" May 13 00:29:11.628572 containerd[1654]: time="2025-05-13T00:29:11.628453905Z" level=info msg="StartContainer for \"1e02273b74ae5bf085e2cae5e052e0da70f86566d98844142df6d477524ae391\"" May 13 00:29:11.664298 containerd[1654]: time="2025-05-13T00:29:11.663763784Z" level=info msg="StopContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" with timeout 30 (s)" May 13 00:29:11.673289 containerd[1654]: time="2025-05-13T00:29:11.671658600Z" level=info msg="Stop container \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" with signal terminated" May 13 00:29:11.763349 containerd[1654]: time="2025-05-13T00:29:11.712097493Z" level=info msg="shim disconnected" id=271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6 namespace=k8s.io May 13 00:29:11.763645 containerd[1654]: time="2025-05-13T00:29:11.720653314Z" level=info msg="StartContainer for \"1e02273b74ae5bf085e2cae5e052e0da70f86566d98844142df6d477524ae391\" returns successfully" May 13 00:29:11.786910 containerd[1654]: time="2025-05-13T00:29:11.786716491Z" level=warning msg="cleaning up after shim disconnected" id=271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6 namespace=k8s.io May 13 00:29:11.786910 containerd[1654]: time="2025-05-13T00:29:11.786738201Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:11.802745 containerd[1654]: time="2025-05-13T00:29:11.802214144Z" level=info msg="shim disconnected" id=a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a namespace=k8s.io May 13 00:29:11.802745 containerd[1654]: time="2025-05-13T00:29:11.802250985Z" level=warning 
msg="cleaning up after shim disconnected" id=a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a namespace=k8s.io May 13 00:29:11.802745 containerd[1654]: time="2025-05-13T00:29:11.802256777Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:11.851114 containerd[1654]: time="2025-05-13T00:29:11.851093295Z" level=info msg="StopContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" returns successfully" May 13 00:29:11.852106 containerd[1654]: time="2025-05-13T00:29:11.852095356Z" level=info msg="StopContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" returns successfully" May 13 00:29:11.853301 containerd[1654]: time="2025-05-13T00:29:11.853201772Z" level=info msg="StopPodSandbox for \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\"" May 13 00:29:11.853388 containerd[1654]: time="2025-05-13T00:29:11.853377050Z" level=info msg="Container to stop \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:11.853664 containerd[1654]: time="2025-05-13T00:29:11.853654148Z" level=info msg="StopPodSandbox for \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\"" May 13 00:29:11.853764 containerd[1654]: time="2025-05-13T00:29:11.853749463Z" level=info msg="Container to stop \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:11.853874 containerd[1654]: time="2025-05-13T00:29:11.853800968Z" level=info msg="Container to stop \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:11.853874 containerd[1654]: time="2025-05-13T00:29:11.853809958Z" level=info msg="Container to stop \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:12.056856 containerd[1654]: time="2025-05-13T00:29:12.056812530Z" level=info msg="shim disconnected" id=ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424 namespace=k8s.io May 13 00:29:12.057068 containerd[1654]: time="2025-05-13T00:29:12.056910997Z" level=warning msg="cleaning up after shim disconnected" id=ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424 namespace=k8s.io May 13 00:29:12.057068 containerd[1654]: time="2025-05-13T00:29:12.056919071Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.060245 containerd[1654]: time="2025-05-13T00:29:12.057823956Z" level=info msg="shim disconnected" id=ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087 namespace=k8s.io May 13 00:29:12.060245 containerd[1654]: time="2025-05-13T00:29:12.057844335Z" level=warning msg="cleaning up after shim disconnected" id=ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087 namespace=k8s.io May 13 00:29:12.060245 containerd[1654]: time="2025-05-13T00:29:12.057849170Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.092617 containerd[1654]: time="2025-05-13T00:29:12.092595043Z" level=info msg="TearDown network for sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" successfully" May 13 00:29:12.092827 containerd[1654]: time="2025-05-13T00:29:12.092818590Z" level=info msg="StopPodSandbox for 
\"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" returns successfully" May 13 00:29:12.130733 kubelet[2967]: I0513 00:29:12.130641 2967 topology_manager.go:215] "Topology Admit Handler" podUID="29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4" podNamespace="calico-system" podName="calico-node-4925d" May 13 00:29:12.139191 kubelet[2967]: E0513 00:29:12.139169 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" containerName="calico-node" May 13 00:29:12.139316 kubelet[2967]: E0513 00:29:12.139309 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" containerName="flexvol-driver" May 13 00:29:12.139420 kubelet[2967]: E0513 00:29:12.139413 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" containerName="install-cni" May 13 00:29:12.168853 kubelet[2967]: I0513 00:29:12.168830 2967 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" containerName="calico-node" May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179084 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-lib-modules\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179114 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xshzx\" (UniqueName: \"kubernetes.io/projected/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-kube-api-access-xshzx\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179126 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-flexvol-driver-host\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179139 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-node-certs\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179148 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-xtables-lock\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.179300 kubelet[2967]: I0513 00:29:12.179156 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-policysync\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179164 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-lib-calico\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " 
May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179174 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-net-dir\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179185 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-log-dir\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179192 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-run-calico\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179202 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-bin-dir\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.181043 kubelet[2967]: I0513 00:29:12.179213 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-tigera-ca-bundle\") pod \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\" (UID: \"a3f0af17-310d-4953-ac6f-bfe350f5a6b8\") " May 13 00:29:12.205710 containerd[1654]: time="2025-05-13T00:29:12.205288388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:12.206660 kubelet[2967]: I0513 00:29:12.204018 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.206713 containerd[1654]: time="2025-05-13T00:29:12.206673162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 00:29:12.209236 containerd[1654]: time="2025-05-13T00:29:12.209060649Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:12.214883 containerd[1654]: time="2025-05-13T00:29:12.214840979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:12.216607 containerd[1654]: time="2025-05-13T00:29:12.216478631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.700298501s" May 13 00:29:12.216607 containerd[1654]: time="2025-05-13T00:29:12.216499225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 00:29:12.235402 kubelet[2967]: I0513 00:29:12.232318 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.238294 systemd-networkd[1286]: cali70442139ad3: Link DOWN May 13 00:29:12.241461 kubelet[2967]: I0513 00:29:12.239313 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:29:12.238298 systemd-networkd[1286]: cali70442139ad3: Lost carrier May 13 00:29:12.244799 kubelet[2967]: I0513 00:29:12.242948 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-kube-api-access-xshzx" (OuterVolumeSpecName: "kube-api-access-xshzx") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "kube-api-access-xshzx". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:29:12.244799 kubelet[2967]: I0513 00:29:12.244521 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244799 kubelet[2967]: I0513 00:29:12.244542 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-policysync" (OuterVolumeSpecName: "policysync") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244799 kubelet[2967]: I0513 00:29:12.244551 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244799 kubelet[2967]: I0513 00:29:12.244559 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244927 kubelet[2967]: I0513 00:29:12.244580 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244927 kubelet[2967]: I0513 00:29:12.244589 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.244927 kubelet[2967]: I0513 00:29:12.244597 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:29:12.247370 kubelet[2967]: I0513 00:29:12.247163 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-node-certs" (OuterVolumeSpecName: "node-certs") pod "a3f0af17-310d-4953-ac6f-bfe350f5a6b8" (UID: "a3f0af17-310d-4953-ac6f-bfe350f5a6b8"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:29:12.258297 containerd[1654]: time="2025-05-13T00:29:12.258275497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:29:12.264867 containerd[1654]: time="2025-05-13T00:29:12.263682633Z" level=info msg="CreateContainer within sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279939 2967 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279957 2967 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279964 2967 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-node-certs\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279969 2967 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-policysync\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279973 2967 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279977 2967 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279981 2967 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280184 kubelet[2967]: I0513 00:29:12.279986 2967 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280725 kubelet[2967]: I0513 00:29:12.279991 2967 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280725 kubelet[2967]: I0513 00:29:12.279995 2967 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280725 kubelet[2967]: I0513 00:29:12.279999 2967 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xshzx\" (UniqueName: \"kubernetes.io/projected/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-kube-api-access-xshzx\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.280725 kubelet[2967]: I0513 00:29:12.280004 2967 reconciler_common.go:289] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f0af17-310d-4953-ac6f-bfe350f5a6b8-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.283658 containerd[1654]: time="2025-05-13T00:29:12.283569052Z" level=info msg="CreateContainer within sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\"" May 13 00:29:12.285208 containerd[1654]: time="2025-05-13T00:29:12.285109492Z" level=info msg="StartContainer for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\"" May 13 00:29:12.317486 systemd-networkd[1286]: calid6f0604a086: Gained IPv6LL May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.237 [INFO][5615] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.237 [INFO][5615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" iface="eth0" netns="/var/run/netns/cni-4ab6a512-6047-8a27-c075-95f5c4f1e05a" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.237 [INFO][5615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" iface="eth0" netns="/var/run/netns/cni-4ab6a512-6047-8a27-c075-95f5c4f1e05a" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.246 [INFO][5615] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" after=9.78715ms iface="eth0" netns="/var/run/netns/cni-4ab6a512-6047-8a27-c075-95f5c4f1e05a" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.247 [INFO][5615] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.247 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.286 [INFO][5626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.287 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.287 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.317 [INFO][5626] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.317 [INFO][5626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.320 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:12.336044 containerd[1654]: 2025-05-13 00:29:12.329 [INFO][5615] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:12.337310 containerd[1654]: time="2025-05-13T00:29:12.336124043Z" level=info msg="TearDown network for sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" successfully" May 13 00:29:12.337310 containerd[1654]: time="2025-05-13T00:29:12.336140062Z" level=info msg="StopPodSandbox for \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" returns successfully" May 13 00:29:12.337310 containerd[1654]: time="2025-05-13T00:29:12.336481463Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:29:12.382417 kubelet[2967]: I0513 00:29:12.381575 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-xtables-lock\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.382417 kubelet[2967]: I0513 00:29:12.381923 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-node-certs\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.382417 kubelet[2967]: I0513 00:29:12.381940 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-var-run-calico\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.382417 kubelet[2967]: I0513 00:29:12.381953 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-tigera-ca-bundle\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.384001 kubelet[2967]: I0513 00:29:12.383628 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-var-lib-calico\") pod \"calico-node-4925d\" (UID: 
\"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.384001 kubelet[2967]: I0513 00:29:12.383651 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-lib-modules\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.384001 kubelet[2967]: I0513 00:29:12.383662 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-cni-net-dir\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.384001 kubelet[2967]: I0513 00:29:12.383674 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-cni-bin-dir\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.384001 kubelet[2967]: I0513 00:29:12.383684 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-policysync\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.388370 kubelet[2967]: I0513 00:29:12.386401 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-cni-log-dir\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.388370 kubelet[2967]: I0513 00:29:12.386455 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8gh\" (UniqueName: \"kubernetes.io/projected/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-kube-api-access-pk8gh\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.388370 kubelet[2967]: I0513 00:29:12.386468 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4-flexvol-driver-host\") pod \"calico-node-4925d\" (UID: \"29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4\") " pod="calico-system/calico-node-4925d" May 13 00:29:12.387631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a-rootfs.mount: Deactivated successfully. May 13 00:29:12.387719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087-rootfs.mount: Deactivated successfully. May 13 00:29:12.387783 systemd[1]: run-netns-cni\x2d4ab6a512\x2d6047\x2d8a27\x2dc075\x2d95f5c4f1e05a.mount: Deactivated successfully. May 13 00:29:12.387836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087-shm.mount: Deactivated successfully. 
May 13 00:29:12.387889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6-rootfs.mount: Deactivated successfully. May 13 00:29:12.387938 systemd[1]: var-lib-kubelet-pods-a3f0af17\x2d310d\x2d4953\x2dac6f\x2dbfe350f5a6b8-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 13 00:29:12.387994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424-rootfs.mount: Deactivated successfully. May 13 00:29:12.388911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424-shm.mount: Deactivated successfully. May 13 00:29:12.388992 systemd[1]: var-lib-kubelet-pods-a3f0af17\x2d310d\x2d4953\x2dac6f\x2dbfe350f5a6b8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 13 00:29:12.389087 systemd[1]: var-lib-kubelet-pods-a3f0af17\x2d310d\x2d4953\x2dac6f\x2dbfe350f5a6b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxshzx.mount: Deactivated successfully. May 13 00:29:12.426230 containerd[1654]: time="2025-05-13T00:29:12.426205477Z" level=info msg="StartContainer for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" returns successfully" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.455 [WARNING][5671] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f38b69e3-9879-44d1-9fca-1ad977d43c8a", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087", Pod:"calico-apiserver-5fcdd59ffd-hwxmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70442139ad3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.455 [INFO][5671] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.455 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.455 [INFO][5671] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.455 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.475 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.477 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.477 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.481 [WARNING][5688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.481 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.481 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:12.487441 containerd[1654]: 2025-05-13 00:29:12.484 [INFO][5671] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:12.489455 containerd[1654]: time="2025-05-13T00:29:12.488936851Z" level=info msg="TearDown network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" successfully" May 13 00:29:12.489455 containerd[1654]: time="2025-05-13T00:29:12.488954521Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" returns successfully" May 13 00:29:12.532341 containerd[1654]: time="2025-05-13T00:29:12.529603973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4925d,Uid:29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4,Namespace:calico-system,Attempt:0,}" May 13 00:29:12.561288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50-rootfs.mount: Deactivated successfully. May 13 00:29:12.576831 containerd[1654]: time="2025-05-13T00:29:12.576736387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:12.578526 containerd[1654]: time="2025-05-13T00:29:12.578493209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:12.578586 containerd[1654]: time="2025-05-13T00:29:12.578525316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:12.585831 containerd[1654]: time="2025-05-13T00:29:12.578682503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:12.612741 containerd[1654]: time="2025-05-13T00:29:12.612713349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4925d,Uid:29f25a67-e8d7-4f92-b2a9-e7604fcfe2c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\"" May 13 00:29:12.634489 containerd[1654]: time="2025-05-13T00:29:12.634462212Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:29:12.651912 containerd[1654]: time="2025-05-13T00:29:12.651881296Z" level=info msg="shim disconnected" id=29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50 namespace=k8s.io May 13 00:29:12.651912 containerd[1654]: time="2025-05-13T00:29:12.651909238Z" level=warning msg="cleaning up after shim disconnected" id=29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50 namespace=k8s.io May 13 00:29:12.651912 containerd[1654]: time="2025-05-13T00:29:12.651914341Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.657897 containerd[1654]: time="2025-05-13T00:29:12.657871519Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8fe4e263779a916394f8a2372934dd04c16205cd0935c6304a96abbfe1e0ef1a\"" May 13 00:29:12.658347 containerd[1654]: time="2025-05-13T00:29:12.658254802Z" level=info msg="StartContainer for \"8fe4e263779a916394f8a2372934dd04c16205cd0935c6304a96abbfe1e0ef1a\"" May 13 00:29:12.663572 containerd[1654]: time="2025-05-13T00:29:12.663550459Z" level=info msg="StopContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" returns successfully" May 13 00:29:12.663974 containerd[1654]: time="2025-05-13T00:29:12.663958873Z" level=info msg="StopPodSandbox for \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\"" May 13 00:29:12.664005 containerd[1654]: time="2025-05-13T00:29:12.663981019Z" level=info msg="Container to stop \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:12.687787 containerd[1654]: time="2025-05-13T00:29:12.686771785Z" level=info msg="StopContainer for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" with timeout 30 (s)" May 13 00:29:12.688257 kubelet[2967]: I0513 00:29:12.688159 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f38b69e3-9879-44d1-9fca-1ad977d43c8a-calico-apiserver-certs\") pod \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\" (UID: \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\") " May 13 00:29:12.688257 kubelet[2967]: I0513 00:29:12.688191 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmdw2\" (UniqueName: 
\"kubernetes.io/projected/f38b69e3-9879-44d1-9fca-1ad977d43c8a-kube-api-access-lmdw2\") pod \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\" (UID: \"f38b69e3-9879-44d1-9fca-1ad977d43c8a\") " May 13 00:29:12.694836 kubelet[2967]: I0513 00:29:12.694026 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76896c5c69-h9cqc" podStartSLOduration=2.693834784 podStartE2EDuration="2.693834784s" podCreationTimestamp="2025-05-13 00:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:12.6807134 +0000 UTC m=+52.502797501" watchObservedRunningTime="2025-05-13 00:29:12.693834784 +0000 UTC m=+52.515918881" May 13 00:29:12.695012 containerd[1654]: time="2025-05-13T00:29:12.694172553Z" level=info msg="Stop container \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" with signal terminated" May 13 00:29:12.697324 kubelet[2967]: I0513 00:29:12.697293 2967 scope.go:117] "RemoveContainer" containerID="a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a" May 13 00:29:12.706112 kubelet[2967]: I0513 00:29:12.706081 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b69e3-9879-44d1-9fca-1ad977d43c8a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f38b69e3-9879-44d1-9fca-1ad977d43c8a" (UID: "f38b69e3-9879-44d1-9fca-1ad977d43c8a"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:29:12.713897 kubelet[2967]: I0513 00:29:12.713875 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38b69e3-9879-44d1-9fca-1ad977d43c8a-kube-api-access-lmdw2" (OuterVolumeSpecName: "kube-api-access-lmdw2") pod "f38b69e3-9879-44d1-9fca-1ad977d43c8a" (UID: "f38b69e3-9879-44d1-9fca-1ad977d43c8a"). InnerVolumeSpecName "kube-api-access-lmdw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:29:12.714811 containerd[1654]: time="2025-05-13T00:29:12.713438833Z" level=info msg="RemoveContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\"" May 13 00:29:12.745549 containerd[1654]: time="2025-05-13T00:29:12.745492629Z" level=info msg="RemoveContainer for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" returns successfully" May 13 00:29:12.757222 containerd[1654]: time="2025-05-13T00:29:12.757123347Z" level=info msg="shim disconnected" id=f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa namespace=k8s.io May 13 00:29:12.757222 containerd[1654]: time="2025-05-13T00:29:12.757166402Z" level=warning msg="cleaning up after shim disconnected" id=f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa namespace=k8s.io May 13 00:29:12.757222 containerd[1654]: time="2025-05-13T00:29:12.757172416Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.763786 kubelet[2967]: I0513 00:29:12.763769 2967 scope.go:117] "RemoveContainer" containerID="a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a" May 13 00:29:12.791722 kubelet[2967]: I0513 00:29:12.791702 2967 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lmdw2\" (UniqueName: \"kubernetes.io/projected/f38b69e3-9879-44d1-9fca-1ad977d43c8a-kube-api-access-lmdw2\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.791722 kubelet[2967]: I0513 00:29:12.791719 2967 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f38b69e3-9879-44d1-9fca-1ad977d43c8a-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 13 00:29:12.802024 kubelet[2967]: I0513 00:29:12.801730 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-786f4bd5f6-5nhlr" podStartSLOduration=28.349960778 podStartE2EDuration="32.801717451s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:29:07.796201844 +0000 UTC m=+47.618285937" lastFinishedPulling="2025-05-13 00:29:12.247958517 +0000 UTC m=+52.070042610" observedRunningTime="2025-05-13 00:29:12.740965131 +0000 UTC m=+52.563049232" watchObservedRunningTime="2025-05-13 00:29:12.801717451 +0000 UTC m=+52.623801549" May 13 00:29:12.810968 containerd[1654]: time="2025-05-13T00:29:12.787868925Z" level=error msg="ContainerStatus for \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\": not found" May 13 00:29:12.824692 kubelet[2967]: E0513 00:29:12.824668 2967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\": not found" containerID="a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a" May 13 00:29:12.826225 kubelet[2967]: I0513 00:29:12.824701 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a"} err="failed to get container status \"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a541ff898dc344da359f32d88aacb08643308e6d0d45876572324afeb6f59b4a\": not found" May 13 00:29:12.826225 kubelet[2967]: I0513 00:29:12.824717 2967 scope.go:117] "RemoveContainer" containerID="271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6" May 13 00:29:12.826815 containerd[1654]: time="2025-05-13T00:29:12.826689901Z" level=info msg="StartContainer for \"8fe4e263779a916394f8a2372934dd04c16205cd0935c6304a96abbfe1e0ef1a\" returns successfully" May 13 00:29:12.828248 containerd[1654]: time="2025-05-13T00:29:12.828204624Z" level=info msg="RemoveContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\"" May 13 00:29:12.829534 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:12.829547 systemd-resolved[1542]: Flushed all caches. May 13 00:29:12.830349 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:12.830831 containerd[1654]: time="2025-05-13T00:29:12.830804712Z" level=info msg="TearDown network for sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" successfully" May 13 00:29:12.830944 containerd[1654]: time="2025-05-13T00:29:12.830934359Z" level=info msg="StopPodSandbox for \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" returns successfully" May 13 00:29:12.835073 containerd[1654]: time="2025-05-13T00:29:12.835044331Z" level=info msg="RemoveContainer for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" returns successfully" May 13 00:29:12.835184 kubelet[2967]: I0513 00:29:12.835174 2967 scope.go:117] "RemoveContainer" containerID="6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b" May 13 00:29:12.838513 containerd[1654]: time="2025-05-13T00:29:12.838497413Z" level=info msg="RemoveContainer for \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\"" May 13 00:29:12.853783 containerd[1654]: time="2025-05-13T00:29:12.853758503Z" level=info msg="RemoveContainer for \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\" returns successfully" May 13 00:29:12.854020 kubelet[2967]: I0513 00:29:12.854006 2967 scope.go:117] "RemoveContainer" containerID="d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767" May 13 00:29:12.855654 containerd[1654]: time="2025-05-13T00:29:12.855636875Z" level=info msg="RemoveContainer for \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\"" May 13 00:29:12.862140 containerd[1654]: time="2025-05-13T00:29:12.862122529Z" level=error msg="ExecSync for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" failed" error="failed to exec in container: failed to start exec \"c8dac5bd4f574bb1941a9e3837f27a3b03e2096f1d9be261cd67b7c167e316ca\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" May 13 00:29:12.862321 kubelet[2967]: E0513 00:29:12.862297 2967 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"c8dac5bd4f574bb1941a9e3837f27a3b03e2096f1d9be261cd67b7c167e316ca\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" cmd=["/usr/bin/check-status","-r"] May 13 00:29:12.864893 containerd[1654]: time="2025-05-13T00:29:12.864859495Z" level=info msg="RemoveContainer for 
\"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\" returns successfully" May 13 00:29:12.865009 kubelet[2967]: I0513 00:29:12.864941 2967 scope.go:117] "RemoveContainer" containerID="271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6" May 13 00:29:12.865158 containerd[1654]: time="2025-05-13T00:29:12.865140579Z" level=error msg="ContainerStatus for \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\": not found" May 13 00:29:12.866121 kubelet[2967]: E0513 00:29:12.866102 2967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\": not found" containerID="271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6" May 13 00:29:12.866225 kubelet[2967]: I0513 00:29:12.866176 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6"} err="failed to get container status \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"271f3a5bd07468e4b02b89c9d767d310efc269a4629cac0c9b84784c7d3ab2a6\": not found" May 13 00:29:12.866225 kubelet[2967]: I0513 00:29:12.866193 2967 scope.go:117] "RemoveContainer" containerID="6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b" May 13 00:29:12.866664 containerd[1654]: time="2025-05-13T00:29:12.866646636Z" level=error msg="ContainerStatus for \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\": not found" May 13 00:29:12.866716 kubelet[2967]: E0513 00:29:12.866703 2967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\": not found" containerID="6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b" May 13 00:29:12.866740 kubelet[2967]: I0513 00:29:12.866720 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b"} err="failed to get container status \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fe8f58352b47d6280ee5478feabda701bcec99cad73f131047c47ca13fd2d4b\": not found" May 13 00:29:12.866740 kubelet[2967]: I0513 00:29:12.866731 2967 scope.go:117] "RemoveContainer" containerID="d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767" May 13 00:29:12.871348 containerd[1654]: time="2025-05-13T00:29:12.871240077Z" level=error msg="ContainerStatus for \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\": not found" May 13 00:29:12.871815 kubelet[2967]: E0513 00:29:12.871756 2967 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\": not found" containerID="d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767" May 13 00:29:12.871815 kubelet[2967]: I0513 00:29:12.871781 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767"} err="failed to get container status \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\": rpc error: code = NotFound desc = an error occurred when try to find container \"d205ccee2730035cdc3dceff3318d7cac278632541bc0786df49cbbd6a5a2767\": not found" May 13 00:29:12.874829 containerd[1654]: time="2025-05-13T00:29:12.874795892Z" level=info msg="shim disconnected" id=d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce namespace=k8s.io May 13 00:29:12.874829 containerd[1654]: time="2025-05-13T00:29:12.874828380Z" level=warning msg="cleaning up after shim disconnected" id=d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce namespace=k8s.io May 13 00:29:12.874985 containerd[1654]: time="2025-05-13T00:29:12.874833893Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.875063 containerd[1654]: time="2025-05-13T00:29:12.875049447Z" level=error msg="ExecSync for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"46008d12704949b4e3473d70042253e5147619a82c8c9929464f775d6ace50d5\": task d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce not found: not found" May 13 00:29:12.876237 kubelet[2967]: E0513 00:29:12.876154 2967 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"46008d12704949b4e3473d70042253e5147619a82c8c9929464f775d6ace50d5\": task d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce not found: not found" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" cmd=["/usr/bin/check-status","-r"] May 13 00:29:12.876728 containerd[1654]: time="2025-05-13T00:29:12.876568552Z" level=error msg="ExecSync for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce not found: not found" May 13 00:29:12.876766 kubelet[2967]: E0513 00:29:12.876631 2967 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce not found: not found" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" cmd=["/usr/bin/check-status","-r"] May 13 00:29:12.887243 containerd[1654]: time="2025-05-13T00:29:12.887210480Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:29:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:29:12.889015 containerd[1654]: time="2025-05-13T00:29:12.888955955Z" level=info msg="StopContainer for 
\"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" returns successfully" May 13 00:29:12.889366 containerd[1654]: time="2025-05-13T00:29:12.889355793Z" level=info msg="StopPodSandbox for \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\"" May 13 00:29:12.889486 containerd[1654]: time="2025-05-13T00:29:12.889414438Z" level=info msg="Container to stop \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:29:12.936351 containerd[1654]: time="2025-05-13T00:29:12.936280227Z" level=info msg="shim disconnected" id=68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5 namespace=k8s.io May 13 00:29:12.936351 containerd[1654]: time="2025-05-13T00:29:12.936317838Z" level=warning msg="cleaning up after shim disconnected" id=68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5 namespace=k8s.io May 13 00:29:12.936351 containerd[1654]: time="2025-05-13T00:29:12.936323621Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.961324 containerd[1654]: time="2025-05-13T00:29:12.961216649Z" level=info msg="shim disconnected" id=8fe4e263779a916394f8a2372934dd04c16205cd0935c6304a96abbfe1e0ef1a namespace=k8s.io May 13 00:29:12.961324 containerd[1654]: time="2025-05-13T00:29:12.961249555Z" level=warning msg="cleaning up after shim disconnected" id=8fe4e263779a916394f8a2372934dd04c16205cd0935c6304a96abbfe1e0ef1a namespace=k8s.io May 13 00:29:12.961324 containerd[1654]: time="2025-05-13T00:29:12.961255835Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:12.989903 systemd-networkd[1286]: cali53d73bed430: Link DOWN May 13 00:29:12.989907 systemd-networkd[1286]: cali53d73bed430: Lost carrier May 13 00:29:12.995028 kubelet[2967]: I0513 00:29:12.994888 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d70339b-c076-427c-b84c-4b7a62badc7c-tigera-ca-bundle\") pod \"3d70339b-c076-427c-b84c-4b7a62badc7c\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " May 13 00:29:12.995028 kubelet[2967]: I0513 00:29:12.994911 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsbg9\" (UniqueName: \"kubernetes.io/projected/3d70339b-c076-427c-b84c-4b7a62badc7c-kube-api-access-bsbg9\") pod \"3d70339b-c076-427c-b84c-4b7a62badc7c\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " May 13 00:29:12.995028 kubelet[2967]: I0513 00:29:12.994925 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d70339b-c076-427c-b84c-4b7a62badc7c-typha-certs\") pod \"3d70339b-c076-427c-b84c-4b7a62badc7c\" (UID: \"3d70339b-c076-427c-b84c-4b7a62badc7c\") " May 13 00:29:12.997978 kubelet[2967]: I0513 00:29:12.997950 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d70339b-c076-427c-b84c-4b7a62badc7c-kube-api-access-bsbg9" (OuterVolumeSpecName: "kube-api-access-bsbg9") pod "3d70339b-c076-427c-b84c-4b7a62badc7c" (UID: "3d70339b-c076-427c-b84c-4b7a62badc7c"). InnerVolumeSpecName "kube-api-access-bsbg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:29:12.998651 kubelet[2967]: I0513 00:29:12.998634 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d70339b-c076-427c-b84c-4b7a62badc7c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "3d70339b-c076-427c-b84c-4b7a62badc7c" (UID: "3d70339b-c076-427c-b84c-4b7a62badc7c"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:29:12.999035 kubelet[2967]: I0513 00:29:12.999017 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d70339b-c076-427c-b84c-4b7a62badc7c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3d70339b-c076-427c-b84c-4b7a62badc7c" (UID: "3d70339b-c076-427c-b84c-4b7a62badc7c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.987 [INFO][5931] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.989 [INFO][5931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" iface="eth0" netns="/var/run/netns/cni-34079748-c87d-cafa-0199-e011a3a5508e" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.989 [INFO][5931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" iface="eth0" netns="/var/run/netns/cni-34079748-c87d-cafa-0199-e011a3a5508e" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.999 [INFO][5931] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" after=10.326517ms iface="eth0" netns="/var/run/netns/cni-34079748-c87d-cafa-0199-e011a3a5508e" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.999 [INFO][5931] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:12.999 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.011 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.011 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.011 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.027 [INFO][5967] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.027 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.027 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:13.029599 containerd[1654]: 2025-05-13 00:29:13.028 [INFO][5931] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:13.030108 containerd[1654]: time="2025-05-13T00:29:13.029919351Z" level=info msg="TearDown network for sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" successfully" May 13 00:29:13.030108 containerd[1654]: time="2025-05-13T00:29:13.029937501Z" level=info msg="StopPodSandbox for \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" returns successfully" May 13 00:29:13.030491 containerd[1654]: time="2025-05-13T00:29:13.030323360Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.052 [WARNING][5985] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0", GenerateName:"calico-kube-controllers-786f4bd5f6-", Namespace:"calico-system", SelfLink:"", UID:"d11f4d12-c6f0-4786-9ade-890faf15b637", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"786f4bd5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5", Pod:"calico-kube-controllers-786f4bd5f6-5nhlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53d73bed430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.052 [INFO][5985] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.052 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.052 [INFO][5985] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.052 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.064 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.064 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.064 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.068 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.068 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.069 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:13.070901 containerd[1654]: 2025-05-13 00:29:13.070 [INFO][5985] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:13.071418 containerd[1654]: time="2025-05-13T00:29:13.071247859Z" level=info msg="TearDown network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" successfully" May 13 00:29:13.071418 containerd[1654]: time="2025-05-13T00:29:13.071266549Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" returns successfully" May 13 00:29:13.095951 kubelet[2967]: I0513 00:29:13.095873 2967 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3d70339b-c076-427c-b84c-4b7a62badc7c-typha-certs\") on node \"localhost\" DevicePath \"\"" May 13 00:29:13.095951 kubelet[2967]: I0513 00:29:13.095889 2967 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d70339b-c076-427c-b84c-4b7a62badc7c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 13 00:29:13.095951 kubelet[2967]: I0513 00:29:13.095896 2967 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bsbg9\" (UniqueName: \"kubernetes.io/projected/3d70339b-c076-427c-b84c-4b7a62badc7c-kube-api-access-bsbg9\") on node \"localhost\" DevicePath \"\"" May 13 00:29:13.196527 kubelet[2967]: I0513 00:29:13.196276 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f4d12-c6f0-4786-9ade-890faf15b637-tigera-ca-bundle\") pod \"d11f4d12-c6f0-4786-9ade-890faf15b637\" (UID: \"d11f4d12-c6f0-4786-9ade-890faf15b637\") " May 13 00:29:13.197290 kubelet[2967]: I0513 00:29:13.196772 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vtbh\" (UniqueName: \"kubernetes.io/projected/d11f4d12-c6f0-4786-9ade-890faf15b637-kube-api-access-9vtbh\") pod \"d11f4d12-c6f0-4786-9ade-890faf15b637\" (UID: \"d11f4d12-c6f0-4786-9ade-890faf15b637\") " May 13 00:29:13.199665 kubelet[2967]: I0513 00:29:13.199644 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11f4d12-c6f0-4786-9ade-890faf15b637-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d11f4d12-c6f0-4786-9ade-890faf15b637" (UID: "d11f4d12-c6f0-4786-9ade-890faf15b637"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:29:13.200575 kubelet[2967]: I0513 00:29:13.199708 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11f4d12-c6f0-4786-9ade-890faf15b637-kube-api-access-9vtbh" (OuterVolumeSpecName: "kube-api-access-9vtbh") pod "d11f4d12-c6f0-4786-9ade-890faf15b637" (UID: "d11f4d12-c6f0-4786-9ade-890faf15b637"). InnerVolumeSpecName "kube-api-access-9vtbh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:29:13.297009 kubelet[2967]: I0513 00:29:13.296960 2967 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9vtbh\" (UniqueName: \"kubernetes.io/projected/d11f4d12-c6f0-4786-9ade-890faf15b637-kube-api-access-9vtbh\") on node \"localhost\" DevicePath \"\"" May 13 00:29:13.297009 kubelet[2967]: I0513 00:29:13.296985 2967 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d11f4d12-c6f0-4786-9ade-890faf15b637-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 13 00:29:13.375685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce-rootfs.mount: Deactivated successfully. May 13 00:29:13.375767 systemd[1]: var-lib-kubelet-pods-d11f4d12\x2dc6f0\x2d4786\x2d9ade\x2d890faf15b637-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 13 00:29:13.375831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5-rootfs.mount: Deactivated successfully. May 13 00:29:13.375880 systemd[1]: run-netns-cni\x2d34079748\x2dc87d\x2dcafa\x2d0199\x2de011a3a5508e.mount: Deactivated successfully. May 13 00:29:13.375928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5-shm.mount: Deactivated successfully. May 13 00:29:13.375981 systemd[1]: var-lib-kubelet-pods-f38b69e3\x2d9879\x2d44d1\x2d9fca\x2d1ad977d43c8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlmdw2.mount: Deactivated successfully. May 13 00:29:13.376054 systemd[1]: var-lib-kubelet-pods-d11f4d12\x2dc6f0\x2d4786\x2d9ade\x2d890faf15b637-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9vtbh.mount: Deactivated successfully. May 13 00:29:13.376134 systemd[1]: var-lib-kubelet-pods-f38b69e3\x2d9879\x2d44d1\x2d9fca\x2d1ad977d43c8a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 00:29:13.376185 systemd[1]: var-lib-kubelet-pods-3d70339b\x2dc076\x2d427c\x2db84c\x2d4b7a62badc7c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. May 13 00:29:13.376235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa-rootfs.mount: Deactivated successfully. May 13 00:29:13.376284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa-shm.mount: Deactivated successfully. May 13 00:29:13.376346 systemd[1]: var-lib-kubelet-pods-3d70339b\x2dc076\x2d427c\x2db84c\x2d4b7a62badc7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbsbg9.mount: Deactivated successfully. 
May 13 00:29:13.376399 systemd[1]: var-lib-kubelet-pods-3d70339b\x2dc076\x2d427c\x2db84c\x2d4b7a62badc7c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 13 00:29:13.771479 kubelet[2967]: I0513 00:29:13.771461 2967 scope.go:117] "RemoveContainer" containerID="29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50" May 13 00:29:13.775203 containerd[1654]: time="2025-05-13T00:29:13.775178624Z" level=info msg="RemoveContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\"" May 13 00:29:13.782558 containerd[1654]: time="2025-05-13T00:29:13.780684252Z" level=info msg="RemoveContainer for \"29cdc28f9ff57dd6f14c3fffdeac74971f4afe2983c64c5a24d359f271ad0d50\" returns successfully" May 13 00:29:13.827565 containerd[1654]: time="2025-05-13T00:29:13.827525841Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:29:13.834576 kubelet[2967]: I0513 00:29:13.834468 2967 scope.go:117] "RemoveContainer" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" May 13 00:29:13.838021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062294309.mount: Deactivated successfully. May 13 00:29:13.841214 containerd[1654]: time="2025-05-13T00:29:13.839493240Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a\"" May 13 00:29:13.841214 containerd[1654]: time="2025-05-13T00:29:13.839915004Z" level=info msg="StartContainer for \"98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a\"" May 13 00:29:13.856134 containerd[1654]: time="2025-05-13T00:29:13.856069240Z" level=info msg="RemoveContainer for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\"" May 13 00:29:13.861441 kubelet[2967]: I0513 00:29:13.861423 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:29:13.874267 containerd[1654]: time="2025-05-13T00:29:13.874147177Z" level=info msg="RemoveContainer for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" returns successfully" May 13 00:29:13.874895 kubelet[2967]: I0513 00:29:13.874739 2967 scope.go:117] "RemoveContainer" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" May 13 00:29:13.875863 containerd[1654]: time="2025-05-13T00:29:13.875754076Z" level=error msg="ContainerStatus for \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\": not found" May 13 00:29:13.876148 kubelet[2967]: E0513 00:29:13.876132 2967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\": not found" containerID="d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce" May 13 00:29:13.876193 kubelet[2967]: I0513 00:29:13.876154 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce"} err="failed to get container status 
\"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\": rpc error: code = NotFound desc = an error occurred when try to find container \"d08c1464d61ceaa93b5826f12d3f7066b647e654f0b145012b4731c514424fce\": not found" May 13 00:29:13.907761 containerd[1654]: time="2025-05-13T00:29:13.907735447Z" level=info msg="StartContainer for \"98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a\" returns successfully" May 13 00:29:13.933653 containerd[1654]: time="2025-05-13T00:29:13.933627963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:13.934039 containerd[1654]: time="2025-05-13T00:29:13.934017356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 00:29:13.934709 containerd[1654]: time="2025-05-13T00:29:13.934443175Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:13.935851 containerd[1654]: time="2025-05-13T00:29:13.935827221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:13.936470 containerd[1654]: time="2025-05-13T00:29:13.936129107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.677830138s" May 13 00:29:13.936470 containerd[1654]: time="2025-05-13T00:29:13.936145535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 00:29:13.938690 containerd[1654]: time="2025-05-13T00:29:13.938677542Z" level=info msg="CreateContainer within sandbox \"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:29:13.957399 containerd[1654]: time="2025-05-13T00:29:13.957376986Z" level=info msg="CreateContainer within sandbox \"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dbe02f977d198e5af12bdb61b5b3a77009e8606992e5934e8dcbe99e1b35ba13\"" May 13 00:29:13.959122 containerd[1654]: time="2025-05-13T00:29:13.958366939Z" level=info msg="StartContainer for \"dbe02f977d198e5af12bdb61b5b3a77009e8606992e5934e8dcbe99e1b35ba13\"" May 13 00:29:14.058350 containerd[1654]: time="2025-05-13T00:29:14.058249808Z" level=info msg="StartContainer for \"dbe02f977d198e5af12bdb61b5b3a77009e8606992e5934e8dcbe99e1b35ba13\" returns successfully" May 13 00:29:14.061353 containerd[1654]: time="2025-05-13T00:29:14.061292636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:29:14.326046 kubelet[2967]: I0513 00:29:14.325943 2967 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d70339b-c076-427c-b84c-4b7a62badc7c" path="/var/lib/kubelet/pods/3d70339b-c076-427c-b84c-4b7a62badc7c/volumes" May 13 00:29:14.327915 kubelet[2967]: I0513 00:29:14.327685 2967 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="a3f0af17-310d-4953-ac6f-bfe350f5a6b8" path="/var/lib/kubelet/pods/a3f0af17-310d-4953-ac6f-bfe350f5a6b8/volumes" May 13 00:29:14.330689 kubelet[2967]: I0513 00:29:14.330580 2967 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" path="/var/lib/kubelet/pods/d11f4d12-c6f0-4786-9ade-890faf15b637/volumes" May 13 00:29:14.331104 kubelet[2967]: I0513 00:29:14.331023 2967 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" path="/var/lib/kubelet/pods/f38b69e3-9879-44d1-9fca-1ad977d43c8a/volumes" May 13 00:29:15.601999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a-rootfs.mount: Deactivated successfully. May 13 00:29:15.603703 containerd[1654]: time="2025-05-13T00:29:15.603648740Z" level=info msg="shim disconnected" id=98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a namespace=k8s.io May 13 00:29:15.603703 containerd[1654]: time="2025-05-13T00:29:15.603700506Z" level=warning msg="cleaning up after shim disconnected" id=98690ab55260c68de63050f4517f20546d25f41fac3c7ecef048b37c37a0485a namespace=k8s.io May 13 00:29:15.603910 containerd[1654]: time="2025-05-13T00:29:15.603707545Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:15.705670 containerd[1654]: time="2025-05-13T00:29:15.705428785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:15.706247 containerd[1654]: time="2025-05-13T00:29:15.706222584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 00:29:15.707851 containerd[1654]: time="2025-05-13T00:29:15.707379572Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:15.708822 containerd[1654]: time="2025-05-13T00:29:15.708809296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:15.709591 containerd[1654]: time="2025-05-13T00:29:15.709579543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.648142079s" May 13 00:29:15.709643 containerd[1654]: time="2025-05-13T00:29:15.709635697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 00:29:15.777666 kubelet[2967]: I0513 00:29:15.772191 2967 topology_manager.go:215] "Topology Admit Handler" podUID="9854eff3-8e24-4160-afd8-88b63a49ad6e" podNamespace="calico-system" podName="calico-typha-7945ff9846-q6zql" May 13 00:29:15.784241 kubelet[2967]: E0513 00:29:15.784210 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d70339b-c076-427c-b84c-4b7a62badc7c" 
containerName="calico-typha" May 13 00:29:15.784399 kubelet[2967]: E0513 00:29:15.784391 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" containerName="calico-apiserver" May 13 00:29:15.785157 kubelet[2967]: E0513 00:29:15.784602 2967 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" containerName="calico-kube-controllers" May 13 00:29:15.785157 kubelet[2967]: I0513 00:29:15.784648 2967 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d70339b-c076-427c-b84c-4b7a62badc7c" containerName="calico-typha" May 13 00:29:15.785157 kubelet[2967]: I0513 00:29:15.784652 2967 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11f4d12-c6f0-4786-9ade-890faf15b637" containerName="calico-kube-controllers" May 13 00:29:15.785157 kubelet[2967]: I0513 00:29:15.784656 2967 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38b69e3-9879-44d1-9fca-1ad977d43c8a" containerName="calico-apiserver" May 13 00:29:15.838710 containerd[1654]: time="2025-05-13T00:29:15.838681743Z" level=info msg="CreateContainer within sandbox \"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:29:15.878944 containerd[1654]: time="2025-05-13T00:29:15.877405899Z" level=info msg="CreateContainer within sandbox \"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"235556c14e8267720413c56c591bb9061e7bce669f0dbd900c0147739040d393\"" May 13 00:29:15.878944 containerd[1654]: time="2025-05-13T00:29:15.877771042Z" level=info msg="StartContainer for \"235556c14e8267720413c56c591bb9061e7bce669f0dbd900c0147739040d393\"" May 13 00:29:15.933616 containerd[1654]: time="2025-05-13T00:29:15.933325094Z" level=info msg="StartContainer for \"235556c14e8267720413c56c591bb9061e7bce669f0dbd900c0147739040d393\" returns successfully" May 13 00:29:16.004921 kubelet[2967]: I0513 00:29:16.004897 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxwb\" (UniqueName: \"kubernetes.io/projected/9854eff3-8e24-4160-afd8-88b63a49ad6e-kube-api-access-wzxwb\") pod \"calico-typha-7945ff9846-q6zql\" (UID: \"9854eff3-8e24-4160-afd8-88b63a49ad6e\") " pod="calico-system/calico-typha-7945ff9846-q6zql" May 13 00:29:16.005004 kubelet[2967]: I0513 00:29:16.004954 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9854eff3-8e24-4160-afd8-88b63a49ad6e-tigera-ca-bundle\") pod \"calico-typha-7945ff9846-q6zql\" (UID: \"9854eff3-8e24-4160-afd8-88b63a49ad6e\") " pod="calico-system/calico-typha-7945ff9846-q6zql" May 13 00:29:16.005004 kubelet[2967]: I0513 00:29:16.004971 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9854eff3-8e24-4160-afd8-88b63a49ad6e-typha-certs\") pod \"calico-typha-7945ff9846-q6zql\" (UID: \"9854eff3-8e24-4160-afd8-88b63a49ad6e\") " pod="calico-system/calico-typha-7945ff9846-q6zql" May 13 00:29:16.018868 containerd[1654]: time="2025-05-13T00:29:16.018804566Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:29:16.043385 containerd[1654]: time="2025-05-13T00:29:16.043307842Z" level=info msg="CreateContainer within sandbox \"04e5f6ba09b44371150dff89d4edb43ea1cea703538495033e2372c6f7ab1b94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dbf7af9c6fed1170becba3747fc4139af9cddc738822030cc7104679a023f7ee\"" May 13 00:29:16.049349 containerd[1654]: time="2025-05-13T00:29:16.049218661Z" level=info msg="StartContainer for \"dbf7af9c6fed1170becba3747fc4139af9cddc738822030cc7104679a023f7ee\"" May 13 00:29:16.092895 containerd[1654]: time="2025-05-13T00:29:16.092869221Z" level=info msg="StartContainer for \"dbf7af9c6fed1170becba3747fc4139af9cddc738822030cc7104679a023f7ee\" returns successfully" May 13 00:29:16.421310 kubelet[2967]: I0513 00:29:16.420701 2967 topology_manager.go:215] "Topology Admit Handler" podUID="cba4145c-72fc-48ab-bca0-bde992abb413" podNamespace="calico-system" podName="calico-kube-controllers-6fbdcfc698-gqqsj" May 13 00:29:16.477815 kubelet[2967]: I0513 00:29:16.477796 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cba4145c-72fc-48ab-bca0-bde992abb413-tigera-ca-bundle\") pod \"calico-kube-controllers-6fbdcfc698-gqqsj\" (UID: \"cba4145c-72fc-48ab-bca0-bde992abb413\") " pod="calico-system/calico-kube-controllers-6fbdcfc698-gqqsj" May 13 00:29:16.477973 kubelet[2967]: I0513 00:29:16.477962 2967 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfm8\" (UniqueName: \"kubernetes.io/projected/cba4145c-72fc-48ab-bca0-bde992abb413-kube-api-access-jxfm8\") pod \"calico-kube-controllers-6fbdcfc698-gqqsj\" (UID: \"cba4145c-72fc-48ab-bca0-bde992abb413\") " pod="calico-system/calico-kube-controllers-6fbdcfc698-gqqsj" May 13 00:29:16.480740 containerd[1654]: time="2025-05-13T00:29:16.480521857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7945ff9846-q6zql,Uid:9854eff3-8e24-4160-afd8-88b63a49ad6e,Namespace:calico-system,Attempt:0,}" May 13 00:29:16.564966 containerd[1654]: time="2025-05-13T00:29:16.564881346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:16.564966 containerd[1654]: time="2025-05-13T00:29:16.564933478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:16.566432 containerd[1654]: time="2025-05-13T00:29:16.564949324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:16.575406 containerd[1654]: time="2025-05-13T00:29:16.569667398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:16.589708 kubelet[2967]: I0513 00:29:16.585504 2967 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:29:16.624721 kubelet[2967]: I0513 00:29:16.623641 2967 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:29:16.683383 containerd[1654]: time="2025-05-13T00:29:16.683290989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7945ff9846-q6zql,Uid:9854eff3-8e24-4160-afd8-88b63a49ad6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1013f767389a2fd4dc937596a418189ac177b9268a8b2f579bf2882753f0e7bf\"" May 13 00:29:16.709876 containerd[1654]: time="2025-05-13T00:29:16.709855417Z" level=info msg="CreateContainer within sandbox \"1013f767389a2fd4dc937596a418189ac177b9268a8b2f579bf2882753f0e7bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:29:16.729552 containerd[1654]: time="2025-05-13T00:29:16.724593114Z" level=info msg="CreateContainer within sandbox \"1013f767389a2fd4dc937596a418189ac177b9268a8b2f579bf2882753f0e7bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"590507fa9eda6db5496d914cc5024515ba97848444b449b083becf5b289ca36b\"" May 13 00:29:16.730809 containerd[1654]: time="2025-05-13T00:29:16.730787847Z" level=info msg="StartContainer for \"590507fa9eda6db5496d914cc5024515ba97848444b449b083becf5b289ca36b\"" May 13 00:29:16.732312 containerd[1654]: time="2025-05-13T00:29:16.732272154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbdcfc698-gqqsj,Uid:cba4145c-72fc-48ab-bca0-bde992abb413,Namespace:calico-system,Attempt:0,}" May 13 00:29:16.830126 containerd[1654]: time="2025-05-13T00:29:16.830105598Z" level=info msg="StartContainer for \"590507fa9eda6db5496d914cc5024515ba97848444b449b083becf5b289ca36b\" returns successfully" May 13 00:29:16.927747 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:16.927466 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:16.927490 systemd-resolved[1542]: Flushed all caches. 
May 13 00:29:16.952923 kubelet[2967]: I0513 00:29:16.950248 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gfr9r" podStartSLOduration=29.066095988 podStartE2EDuration="36.940818544s" podCreationTimestamp="2025-05-13 00:28:40 +0000 UTC" firstStartedPulling="2025-05-13 00:29:07.861110037 +0000 UTC m=+47.683194130" lastFinishedPulling="2025-05-13 00:29:15.735832592 +0000 UTC m=+55.557916686" observedRunningTime="2025-05-13 00:29:16.912572296 +0000 UTC m=+56.734656391" watchObservedRunningTime="2025-05-13 00:29:16.940818544 +0000 UTC m=+56.762902640" May 13 00:29:16.966686 systemd-networkd[1286]: cali4f3ec51d24b: Link UP May 13 00:29:16.968546 systemd-networkd[1286]: cali4f3ec51d24b: Gained carrier May 13 00:29:16.973935 kubelet[2967]: I0513 00:29:16.973467 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4925d" podStartSLOduration=4.97345601 podStartE2EDuration="4.97345601s" podCreationTimestamp="2025-05-13 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:16.953422244 +0000 UTC m=+56.775506337" watchObservedRunningTime="2025-05-13 00:29:16.97345601 +0000 UTC m=+56.795540107" May 13 00:29:16.975153 kubelet[2967]: I0513 00:29:16.975004 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7945ff9846-q6zql" podStartSLOduration=5.974996758 podStartE2EDuration="5.974996758s" podCreationTimestamp="2025-05-13 00:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:16.974920587 +0000 UTC m=+56.797004682" watchObservedRunningTime="2025-05-13 00:29:16.974996758 +0000 UTC m=+56.797080860" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.850 [INFO][6250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0 calico-kube-controllers-6fbdcfc698- calico-system cba4145c-72fc-48ab-bca0-bde992abb413 1039 0 2025-05-13 00:29:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fbdcfc698 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fbdcfc698-gqqsj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4f3ec51d24b [] []}} ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.852 [INFO][6250] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.887 [INFO][6290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" 
HandleID="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Workload="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.897 [INFO][6290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" HandleID="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Workload="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fbdcfc698-gqqsj", "timestamp":"2025-05-13 00:29:16.886953288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.897 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.899 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.899 [INFO][6290] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.904 [INFO][6290] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.909 [INFO][6290] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.918 [INFO][6290] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.921 [INFO][6290] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.925 [INFO][6290] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.926 [INFO][6290] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.931 [INFO][6290] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51 May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.936 [INFO][6290] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.943 [INFO][6290] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.943 [INFO][6290] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] 
handle="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" host="localhost" May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.943 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:16.991470 containerd[1654]: 2025-05-13 00:29:16.943 [INFO][6290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" HandleID="k8s-pod-network.f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Workload="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.946 [INFO][6250] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0", GenerateName:"calico-kube-controllers-6fbdcfc698-", Namespace:"calico-system", SelfLink:"", UID:"cba4145c-72fc-48ab-bca0-bde992abb413", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbdcfc698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fbdcfc698-gqqsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f3ec51d24b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.947 [INFO][6250] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.137/32] ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.947 [INFO][6250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f3ec51d24b ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.971 [INFO][6250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" 
Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.972 [INFO][6250] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0", GenerateName:"calico-kube-controllers-6fbdcfc698-", Namespace:"calico-system", SelfLink:"", UID:"cba4145c-72fc-48ab-bca0-bde992abb413", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fbdcfc698", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51", Pod:"calico-kube-controllers-6fbdcfc698-gqqsj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4f3ec51d24b", MAC:"5a:df:1c:fe:5f:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:16.994927 containerd[1654]: 2025-05-13 00:29:16.984 [INFO][6250] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51" Namespace="calico-system" Pod="calico-kube-controllers-6fbdcfc698-gqqsj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fbdcfc698--gqqsj-eth0" May 13 00:29:17.080256 containerd[1654]: time="2025-05-13T00:29:17.079801773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:17.080256 containerd[1654]: time="2025-05-13T00:29:17.079846544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:17.080256 containerd[1654]: time="2025-05-13T00:29:17.079890358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:17.080256 containerd[1654]: time="2025-05-13T00:29:17.079963920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:17.110794 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:17.131316 containerd[1654]: time="2025-05-13T00:29:17.131243875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fbdcfc698-gqqsj,Uid:cba4145c-72fc-48ab-bca0-bde992abb413,Namespace:calico-system,Attempt:0,} returns sandbox id \"f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51\"" May 13 00:29:17.144261 containerd[1654]: time="2025-05-13T00:29:17.144210177Z" level=info msg="CreateContainer within sandbox \"f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:29:17.152371 containerd[1654]: time="2025-05-13T00:29:17.152145203Z" level=info msg="CreateContainer within sandbox \"f791efc2fc67fa4bdc4e17530998f4af18524f04e98d7f2fc687e2ea8ed47e51\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39c424483cecd0e0f6ff76988ca35d071475e4ea59ccf3ea65c9895a089308a9\"" May 13 00:29:17.153196 containerd[1654]: time="2025-05-13T00:29:17.152610172Z" level=info msg="StartContainer for \"39c424483cecd0e0f6ff76988ca35d071475e4ea59ccf3ea65c9895a089308a9\"" May 13 00:29:17.214764 containerd[1654]: time="2025-05-13T00:29:17.214698620Z" level=info msg="StartContainer for \"39c424483cecd0e0f6ff76988ca35d071475e4ea59ccf3ea65c9895a089308a9\" returns successfully" May 13 00:29:17.628420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3111181144.mount: Deactivated successfully. May 13 00:29:18.053288 kubelet[2967]: I0513 00:29:18.051874 2967 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fbdcfc698-gqqsj" podStartSLOduration=5.044724895 podStartE2EDuration="5.044724895s" podCreationTimestamp="2025-05-13 00:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:18.042278103 +0000 UTC m=+57.864362208" watchObservedRunningTime="2025-05-13 00:29:18.044724895 +0000 UTC m=+57.866808997" May 13 00:29:18.089186 systemd[1]: run-containerd-runc-k8s.io-39c424483cecd0e0f6ff76988ca35d071475e4ea59ccf3ea65c9895a089308a9-runc.5IKcLO.mount: Deactivated successfully. May 13 00:29:18.781451 systemd-networkd[1286]: cali4f3ec51d24b: Gained IPv6LL May 13 00:29:18.973638 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:18.976593 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:18.973644 systemd-resolved[1542]: Flushed all caches. 
May 13 00:29:20.756559 containerd[1654]: time="2025-05-13T00:29:20.756433220Z" level=info msg="StopPodSandbox for \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\"" May 13 00:29:20.827063 containerd[1654]: time="2025-05-13T00:29:20.826611722Z" level=info msg="TearDown network for sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" successfully" May 13 00:29:20.827063 containerd[1654]: time="2025-05-13T00:29:20.826637239Z" level=info msg="StopPodSandbox for \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" returns successfully" May 13 00:29:21.014926 containerd[1654]: time="2025-05-13T00:29:21.013872981Z" level=info msg="RemovePodSandbox for \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\"" May 13 00:29:21.018455 containerd[1654]: time="2025-05-13T00:29:21.018313669Z" level=info msg="Forcibly stopping sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\"" May 13 00:29:21.018455 containerd[1654]: time="2025-05-13T00:29:21.018396963Z" level=info msg="TearDown network for sandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" successfully" May 13 00:29:21.022322 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:21.022338 systemd-resolved[1542]: Flushed all caches. May 13 00:29:21.025307 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:21.052988 containerd[1654]: time="2025-05-13T00:29:21.052834153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:21.092922 containerd[1654]: time="2025-05-13T00:29:21.092721707Z" level=info msg="RemovePodSandbox \"f036290211faf8d4c836b3e54d2c626c3fd3bad697261c5464386326ea4d2faa\" returns successfully" May 13 00:29:21.094576 containerd[1654]: time="2025-05-13T00:29:21.094539600Z" level=info msg="StopPodSandbox for \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\"" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:21.543 [WARNING][6780] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:21.550 [INFO][6780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:21.550 [INFO][6780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" iface="eth0" netns="" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:21.550 [INFO][6780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:21.550 [INFO][6780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.580 [INFO][6787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.586 [INFO][6787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.586 [INFO][6787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.605 [WARNING][6787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.605 [INFO][6787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.606 [INFO][6787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:22.609791 containerd[1654]: 2025-05-13 00:29:22.608 [INFO][6780] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.615436 containerd[1654]: time="2025-05-13T00:29:22.609844692Z" level=info msg="TearDown network for sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" successfully" May 13 00:29:22.615436 containerd[1654]: time="2025-05-13T00:29:22.609923071Z" level=info msg="StopPodSandbox for \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" returns successfully" May 13 00:29:22.615436 containerd[1654]: time="2025-05-13T00:29:22.610479349Z" level=info msg="RemovePodSandbox for \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\"" May 13 00:29:22.615436 containerd[1654]: time="2025-05-13T00:29:22.610498676Z" level=info msg="Forcibly stopping sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\"" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.804 [WARNING][6806] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.804 [INFO][6806] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.804 [INFO][6806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" iface="eth0" netns="" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.804 [INFO][6806] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.804 [INFO][6806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.829 [INFO][6813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.829 [INFO][6813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.829 [INFO][6813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.833 [WARNING][6813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.833 [INFO][6813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" HandleID="k8s-pod-network.68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.837 [INFO][6813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:22.841175 containerd[1654]: 2025-05-13 00:29:22.839 [INFO][6806] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5" May 13 00:29:22.844549 containerd[1654]: time="2025-05-13T00:29:22.841153668Z" level=info msg="TearDown network for sandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" successfully" May 13 00:29:22.850660 containerd[1654]: time="2025-05-13T00:29:22.850622140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:22.850746 containerd[1654]: time="2025-05-13T00:29:22.850679837Z" level=info msg="RemovePodSandbox \"68fae20ca00d661345e722dd8ed1da97d49c518574bf2a60f1588a635c32a0a5\" returns successfully" May 13 00:29:22.853641 containerd[1654]: time="2025-05-13T00:29:22.851002163Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.936 [WARNING][6831] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.936 [INFO][6831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.936 [INFO][6831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.936 [INFO][6831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.936 [INFO][6831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.960 [INFO][6838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.960 [INFO][6838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.960 [INFO][6838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.964 [WARNING][6838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.964 [INFO][6838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.964 [INFO][6838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:22.967671 containerd[1654]: 2025-05-13 00:29:22.966 [INFO][6831] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:22.987897 containerd[1654]: time="2025-05-13T00:29:22.967653029Z" level=info msg="TearDown network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" successfully" May 13 00:29:22.987897 containerd[1654]: time="2025-05-13T00:29:22.968225753Z" level=info msg="StopPodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" returns successfully" May 13 00:29:22.987897 containerd[1654]: time="2025-05-13T00:29:22.970858754Z" level=info msg="RemovePodSandbox for \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:29:22.987897 containerd[1654]: time="2025-05-13T00:29:22.970879630Z" level=info msg="Forcibly stopping sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\"" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.007 [WARNING][6856] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.007 [INFO][6856] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.007 [INFO][6856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" iface="eth0" netns="" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.007 [INFO][6856] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.007 [INFO][6856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.023 [INFO][6863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.023 [INFO][6863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.023 [INFO][6863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.030 [WARNING][6863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.030 [INFO][6863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" HandleID="k8s-pod-network.e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" Workload="localhost-k8s-calico--kube--controllers--786f4bd5f6--5nhlr-eth0" May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.031 [INFO][6863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.034629 containerd[1654]: 2025-05-13 00:29:23.033 [INFO][6856] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18" May 13 00:29:23.035471 containerd[1654]: time="2025-05-13T00:29:23.034659411Z" level=info msg="TearDown network for sandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" successfully" May 13 00:29:23.038660 containerd[1654]: time="2025-05-13T00:29:23.038617974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.038753 containerd[1654]: time="2025-05-13T00:29:23.038678956Z" level=info msg="RemovePodSandbox \"e6dc441ee5996693cf7460489ac2eab79d9a40d34f119c96bc9affd6fd513f18\" returns successfully" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039003621Z" level=info msg="StopPodSandbox for \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\"" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039056978Z" level=info msg="TearDown network for sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" successfully" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039064812Z" level=info msg="StopPodSandbox for \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" returns successfully" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039230962Z" level=info msg="RemovePodSandbox for \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\"" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039260980Z" level=info msg="Forcibly stopping sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\"" May 13 00:29:23.039596 containerd[1654]: time="2025-05-13T00:29:23.039297957Z" level=info msg="TearDown network for sandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" successfully" May 13 00:29:23.042107 containerd[1654]: time="2025-05-13T00:29:23.042079142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 13 00:29:23.042182 containerd[1654]: time="2025-05-13T00:29:23.042127538Z" level=info msg="RemovePodSandbox \"ee56a109e98884806b1faa5ab67f1896e594d25111965495500941d6f16ef424\" returns successfully" May 13 00:29:23.042599 containerd[1654]: time="2025-05-13T00:29:23.042582350Z" level=info msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" May 13 00:29:23.072806 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:23.071545 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:23.071550 systemd-resolved[1542]: Flushed all caches. May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.077 [WARNING][6881] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d3cbd6-98df-4be7-b394-57ec06b3789c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc", Pod:"calico-apiserver-76896c5c69-r4r9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd8a7f0d829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.077 [INFO][6881] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.077 [INFO][6881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" iface="eth0" netns="" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.077 [INFO][6881] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.077 [INFO][6881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.097 [INFO][6888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.098 [INFO][6888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.098 [INFO][6888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.105 [WARNING][6888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.105 [INFO][6888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.106 [INFO][6888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.108538 containerd[1654]: 2025-05-13 00:29:23.107 [INFO][6881] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.108940 containerd[1654]: time="2025-05-13T00:29:23.108578556Z" level=info msg="TearDown network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" successfully" May 13 00:29:23.108940 containerd[1654]: time="2025-05-13T00:29:23.108593837Z" level=info msg="StopPodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" returns successfully" May 13 00:29:23.108940 containerd[1654]: time="2025-05-13T00:29:23.108931303Z" level=info msg="RemovePodSandbox for \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" May 13 00:29:23.108993 containerd[1654]: time="2025-05-13T00:29:23.108962873Z" level=info msg="Forcibly stopping sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\"" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.130 [WARNING][6906] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0", GenerateName:"calico-apiserver-76896c5c69-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d3cbd6-98df-4be7-b394-57ec06b3789c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76896c5c69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4ee7a20bdeb1c591609ee81eb9e8342bb0be15af54d898d113950d3a8a50efc", Pod:"calico-apiserver-76896c5c69-r4r9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd8a7f0d829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.143 [INFO][6906] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.143 [INFO][6906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" iface="eth0" netns="" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.143 [INFO][6906] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.143 [INFO][6906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.156 [INFO][6913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.156 [INFO][6913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.157 [INFO][6913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.160 [WARNING][6913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.160 [INFO][6913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" HandleID="k8s-pod-network.3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" Workload="localhost-k8s-calico--apiserver--76896c5c69--r4r9w-eth0" May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.161 [INFO][6913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.163113 containerd[1654]: 2025-05-13 00:29:23.162 [INFO][6906] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9" May 13 00:29:23.186838 containerd[1654]: time="2025-05-13T00:29:23.163138235Z" level=info msg="TearDown network for sandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" successfully" May 13 00:29:23.201079 containerd[1654]: time="2025-05-13T00:29:23.201047174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.201152 containerd[1654]: time="2025-05-13T00:29:23.201119865Z" level=info msg="RemovePodSandbox \"3858951aad868e71994afe1d06847248f35d4673e4c791ff209e3d7ac3c17ea9\" returns successfully" May 13 00:29:23.201601 containerd[1654]: time="2025-05-13T00:29:23.201431558Z" level=info msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.224 [WARNING][6931] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfr9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8207c568-159c-4015-827d-1b226c94f3cf", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698", Pod:"csi-node-driver-gfr9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid510999f90e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.224 [INFO][6931] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.224 [INFO][6931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" iface="eth0" netns="" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.224 [INFO][6931] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.225 [INFO][6931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.240 [INFO][6938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.240 [INFO][6938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.240 [INFO][6938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.245 [WARNING][6938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.245 [INFO][6938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.246 [INFO][6938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.249098 containerd[1654]: 2025-05-13 00:29:23.247 [INFO][6931] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.249672 containerd[1654]: time="2025-05-13T00:29:23.249076151Z" level=info msg="TearDown network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" successfully" May 13 00:29:23.249672 containerd[1654]: time="2025-05-13T00:29:23.249664780Z" level=info msg="StopPodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" returns successfully" May 13 00:29:23.250205 containerd[1654]: time="2025-05-13T00:29:23.250137949Z" level=info msg="RemovePodSandbox for \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" May 13 00:29:23.250205 containerd[1654]: time="2025-05-13T00:29:23.250161945Z" level=info msg="Forcibly stopping sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\"" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.278 [WARNING][6956] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfr9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8207c568-159c-4015-827d-1b226c94f3cf", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"356931c6ab7a9c03836f1e4a94676e311ce43c2b080aa50abc472a40a8d1a698", Pod:"csi-node-driver-gfr9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid510999f90e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.278 [INFO][6956] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.278 [INFO][6956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" iface="eth0" netns="" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.278 [INFO][6956] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.278 [INFO][6956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.293 [INFO][6963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.293 [INFO][6963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.293 [INFO][6963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.296 [WARNING][6963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.296 [INFO][6963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" HandleID="k8s-pod-network.18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" Workload="localhost-k8s-csi--node--driver--gfr9r-eth0" May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.297 [INFO][6963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.299657 containerd[1654]: 2025-05-13 00:29:23.298 [INFO][6956] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8" May 13 00:29:23.300067 containerd[1654]: time="2025-05-13T00:29:23.299688057Z" level=info msg="TearDown network for sandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" successfully" May 13 00:29:23.343288 containerd[1654]: time="2025-05-13T00:29:23.343254835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.343384 containerd[1654]: time="2025-05-13T00:29:23.343307030Z" level=info msg="RemovePodSandbox \"18dd7149039e82dd824df6286a02bfb259d4b9e18c4b218541ea3ba28be029e8\" returns successfully" May 13 00:29:23.343793 containerd[1654]: time="2025-05-13T00:29:23.343660609Z" level=info msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.375 [WARNING][6981] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb", Pod:"coredns-7db6d8ff4d-wnsqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb572dadb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.375 [INFO][6981] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.375 [INFO][6981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" iface="eth0" netns="" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.375 [INFO][6981] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.375 [INFO][6981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.388 [INFO][6988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.388 [INFO][6988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.388 [INFO][6988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.392 [WARNING][6988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.392 [INFO][6988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.393 [INFO][6988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.395477 containerd[1654]: 2025-05-13 00:29:23.394 [INFO][6981] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.403799 containerd[1654]: time="2025-05-13T00:29:23.395518919Z" level=info msg="TearDown network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" successfully" May 13 00:29:23.403799 containerd[1654]: time="2025-05-13T00:29:23.395540170Z" level=info msg="StopPodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" returns successfully" May 13 00:29:23.403799 containerd[1654]: time="2025-05-13T00:29:23.396003718Z" level=info msg="RemovePodSandbox for \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" May 13 00:29:23.403799 containerd[1654]: time="2025-05-13T00:29:23.396065779Z" level=info msg="Forcibly stopping sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\"" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.420 [WARNING][7006] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6eb2e449-b9a2-4894-b3ae-1030b0bd4c24", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74094c76b45426af403322529b393924093f6019005861a048db11c48de429bb", Pod:"coredns-7db6d8ff4d-wnsqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cb572dadb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.420 [INFO][7006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.420 [INFO][7006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" iface="eth0" netns="" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.420 [INFO][7006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.420 [INFO][7006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.439 [INFO][7013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.439 [INFO][7013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.439 [INFO][7013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.443 [WARNING][7013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.443 [INFO][7013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" HandleID="k8s-pod-network.078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" Workload="localhost-k8s-coredns--7db6d8ff4d--wnsqm-eth0" May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.444 [INFO][7013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.447482 containerd[1654]: 2025-05-13 00:29:23.446 [INFO][7006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436" May 13 00:29:23.447956 containerd[1654]: time="2025-05-13T00:29:23.447936830Z" level=info msg="TearDown network for sandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" successfully" May 13 00:29:23.494530 containerd[1654]: time="2025-05-13T00:29:23.494498333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.494681 containerd[1654]: time="2025-05-13T00:29:23.494670923Z" level=info msg="RemovePodSandbox \"078335b4a112e958f144bcd326e6bfce7821d1a603db8a7da766dc2d63af9436\" returns successfully" May 13 00:29:23.549489 containerd[1654]: time="2025-05-13T00:29:23.549416496Z" level=info msg="StopPodSandbox for \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\"" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.597 [WARNING][7031] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.597 [INFO][7031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.597 [INFO][7031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" iface="eth0" netns="" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.597 [INFO][7031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.597 [INFO][7031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.618 [INFO][7038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.618 [INFO][7038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.618 [INFO][7038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.622 [WARNING][7038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.622 [INFO][7038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.623 [INFO][7038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.625804 containerd[1654]: 2025-05-13 00:29:23.624 [INFO][7031] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.627102 containerd[1654]: time="2025-05-13T00:29:23.625832135Z" level=info msg="TearDown network for sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" successfully" May 13 00:29:23.627102 containerd[1654]: time="2025-05-13T00:29:23.625848232Z" level=info msg="StopPodSandbox for \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" returns successfully" May 13 00:29:23.627102 containerd[1654]: time="2025-05-13T00:29:23.626698741Z" level=info msg="RemovePodSandbox for \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\"" May 13 00:29:23.627102 containerd[1654]: time="2025-05-13T00:29:23.626720637Z" level=info msg="Forcibly stopping sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\"" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.653 [WARNING][7056] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.653 [INFO][7056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.653 [INFO][7056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" iface="eth0" netns="" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.653 [INFO][7056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.654 [INFO][7056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.672 [INFO][7063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.672 [INFO][7063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.672 [INFO][7063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.676 [WARNING][7063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.676 [INFO][7063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" HandleID="k8s-pod-network.ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.683 [INFO][7063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.686096 containerd[1654]: 2025-05-13 00:29:23.685 [INFO][7056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087" May 13 00:29:23.700685 containerd[1654]: time="2025-05-13T00:29:23.686118889Z" level=info msg="TearDown network for sandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" successfully" May 13 00:29:23.732404 containerd[1654]: time="2025-05-13T00:29:23.732381246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.732463 containerd[1654]: time="2025-05-13T00:29:23.732426774Z" level=info msg="RemovePodSandbox \"ee177d1e7bc80240241d282d772b455fb5d8f5bf6076d993d144d7dc913d6087\" returns successfully" May 13 00:29:23.732883 containerd[1654]: time="2025-05-13T00:29:23.732790477Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.765 [WARNING][7081] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.765 [INFO][7081] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.765 [INFO][7081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.765 [INFO][7081] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.765 [INFO][7081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.779 [INFO][7088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.779 [INFO][7088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.779 [INFO][7088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.785 [WARNING][7088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.785 [INFO][7088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.786 [INFO][7088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.788586 containerd[1654]: 2025-05-13 00:29:23.787 [INFO][7081] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.788920 containerd[1654]: time="2025-05-13T00:29:23.788626609Z" level=info msg="TearDown network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" successfully" May 13 00:29:23.788920 containerd[1654]: time="2025-05-13T00:29:23.788643404Z" level=info msg="StopPodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" returns successfully" May 13 00:29:23.789059 containerd[1654]: time="2025-05-13T00:29:23.789043204Z" level=info msg="RemovePodSandbox for \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:29:23.789085 containerd[1654]: time="2025-05-13T00:29:23.789062076Z" level=info msg="Forcibly stopping sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\"" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.812 [WARNING][7106] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.812 [INFO][7106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.812 [INFO][7106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" iface="eth0" netns="" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.812 [INFO][7106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.812 [INFO][7106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.847 [INFO][7113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.847 [INFO][7113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.847 [INFO][7113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.850 [WARNING][7113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.850 [INFO][7113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" HandleID="k8s-pod-network.95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--hwxmq-eth0" May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.851 [INFO][7113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.854346 containerd[1654]: 2025-05-13 00:29:23.852 [INFO][7106] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073" May 13 00:29:23.854346 containerd[1654]: time="2025-05-13T00:29:23.853631419Z" level=info msg="TearDown network for sandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" successfully" May 13 00:29:23.888620 containerd[1654]: time="2025-05-13T00:29:23.888521022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:23.888620 containerd[1654]: time="2025-05-13T00:29:23.888560747Z" level=info msg="RemovePodSandbox \"95514c060ba7bfb0e198e40c76b60d097c2db68ab36d63bbe6781ed15e5da073\" returns successfully" May 13 00:29:23.889111 containerd[1654]: time="2025-05-13T00:29:23.889087002Z" level=info msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.921 [WARNING][7132] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f1ddd1-14cd-452c-934e-9a28944a7b23", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49", Pod:"calico-apiserver-5fcdd59ffd-gtcr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali980d56dedd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.921 [INFO][7132] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.921 [INFO][7132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" iface="eth0" netns="" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.921 [INFO][7132] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.921 [INFO][7132] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.936 [INFO][7139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.936 [INFO][7139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.936 [INFO][7139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.940 [WARNING][7139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.940 [INFO][7139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.941 [INFO][7139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.943558 containerd[1654]: 2025-05-13 00:29:23.942 [INFO][7132] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.944043 containerd[1654]: time="2025-05-13T00:29:23.943596550Z" level=info msg="TearDown network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" successfully" May 13 00:29:23.944043 containerd[1654]: time="2025-05-13T00:29:23.943612628Z" level=info msg="StopPodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" returns successfully" May 13 00:29:23.944043 containerd[1654]: time="2025-05-13T00:29:23.943920077Z" level=info msg="RemovePodSandbox for \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" May 13 00:29:23.944043 containerd[1654]: time="2025-05-13T00:29:23.943936655Z" level=info msg="Forcibly stopping sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\"" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.967 [WARNING][7157] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0", GenerateName:"calico-apiserver-5fcdd59ffd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d2f1ddd1-14cd-452c-934e-9a28944a7b23", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdd59ffd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49", Pod:"calico-apiserver-5fcdd59ffd-gtcr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali980d56dedd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.967 [INFO][7157] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.967 [INFO][7157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" iface="eth0" netns="" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.967 [INFO][7157] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.967 [INFO][7157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.980 [INFO][7164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.980 [INFO][7164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.980 [INFO][7164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.985 [WARNING][7164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.985 [INFO][7164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" HandleID="k8s-pod-network.f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.985 [INFO][7164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:23.988119 containerd[1654]: 2025-05-13 00:29:23.987 [INFO][7157] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b" May 13 00:29:23.988510 containerd[1654]: time="2025-05-13T00:29:23.988133486Z" level=info msg="TearDown network for sandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" successfully" May 13 00:29:24.033125 containerd[1654]: time="2025-05-13T00:29:24.033092823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:24.033211 containerd[1654]: time="2025-05-13T00:29:24.033142366Z" level=info msg="RemovePodSandbox \"f8eef8de32e59bda577b2e39ecefefac4fcef6ea2ee8bab236c9d2342a3a2a5b\" returns successfully" May 13 00:29:24.033512 containerd[1654]: time="2025-05-13T00:29:24.033490566Z" level=info msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.064 [WARNING][7182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5ecaa235-5efa-4dbf-aaf9-e49c99601adb", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2", Pod:"coredns-7db6d8ff4d-jpqnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid947ce0e2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.064 [INFO][7182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.064 [INFO][7182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" iface="eth0" netns="" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.064 [INFO][7182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.064 [INFO][7182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.078 [INFO][7189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.078 [INFO][7189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.078 [INFO][7189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.082 [WARNING][7189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.082 [INFO][7189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.083 [INFO][7189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:24.086559 containerd[1654]: 2025-05-13 00:29:24.085 [INFO][7182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.086559 containerd[1654]: time="2025-05-13T00:29:24.086459530Z" level=info msg="TearDown network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" successfully" May 13 00:29:24.086559 containerd[1654]: time="2025-05-13T00:29:24.086480179Z" level=info msg="StopPodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" returns successfully" May 13 00:29:24.089180 containerd[1654]: time="2025-05-13T00:29:24.087576642Z" level=info msg="RemovePodSandbox for \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" May 13 00:29:24.089180 containerd[1654]: time="2025-05-13T00:29:24.087597304Z" level=info msg="Forcibly stopping sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\"" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.118 [WARNING][7207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5ecaa235-5efa-4dbf-aaf9-e49c99601adb", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"747fb62411bfde1dfa9a7f00feba5311ce26dfaa9fb21e1648aeefe57fa3b7d2", Pod:"coredns-7db6d8ff4d-jpqnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid947ce0e2f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.118 [INFO][7207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.118 [INFO][7207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" iface="eth0" netns="" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.118 [INFO][7207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.118 [INFO][7207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.137 [INFO][7214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.137 [INFO][7214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.137 [INFO][7214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.141 [WARNING][7214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.141 [INFO][7214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" HandleID="k8s-pod-network.430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" Workload="localhost-k8s-coredns--7db6d8ff4d--jpqnf-eth0" May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.141 [INFO][7214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:29:24.143920 containerd[1654]: 2025-05-13 00:29:24.142 [INFO][7207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098" May 13 00:29:24.144239 containerd[1654]: time="2025-05-13T00:29:24.144224570Z" level=info msg="TearDown network for sandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" successfully" May 13 00:29:24.183456 containerd[1654]: time="2025-05-13T00:29:24.183433800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:29:24.183636 containerd[1654]: time="2025-05-13T00:29:24.183580790Z" level=info msg="RemovePodSandbox \"430af441a72f539a91045e2e8f325527c635c795f894d12c97ba94d58aa93098\" returns successfully" May 13 00:29:29.657564 systemd[1]: Started sshd@7-139.178.70.107:22-139.178.68.195:49906.service - OpenSSH per-connection server daemon (139.178.68.195:49906). May 13 00:29:29.762617 kubelet[2967]: I0513 00:29:29.762126 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:29:30.246963 sshd[7236]: Accepted publickey for core from 139.178.68.195 port 49906 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:30.266350 sshd[7236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:30.289139 systemd-logind[1625]: New session 10 of user core. May 13 00:29:30.297658 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:29:30.813739 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:30.813745 systemd-resolved[1542]: Flushed all caches. May 13 00:29:30.815351 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:31.529917 sshd[7236]: pam_unix(sshd:session): session closed for user core May 13 00:29:31.553979 systemd[1]: sshd@7-139.178.70.107:22-139.178.68.195:49906.service: Deactivated successfully. May 13 00:29:31.556874 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:29:31.557008 systemd-logind[1625]: Session 10 logged out. Waiting for processes to exit. May 13 00:29:31.558021 systemd-logind[1625]: Removed session 10. May 13 00:29:32.861625 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:32.862475 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:32.861631 systemd-resolved[1542]: Flushed all caches. May 13 00:29:36.539496 systemd[1]: Started sshd@8-139.178.70.107:22-139.178.68.195:41954.service - OpenSSH per-connection server daemon (139.178.68.195:41954). 
May 13 00:29:36.585745 sshd[7264]: Accepted publickey for core from 139.178.68.195 port 41954 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:36.586653 sshd[7264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:36.589618 systemd-logind[1625]: New session 11 of user core. May 13 00:29:36.596565 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:29:36.763239 sshd[7264]: pam_unix(sshd:session): session closed for user core May 13 00:29:36.765271 systemd[1]: sshd@8-139.178.70.107:22-139.178.68.195:41954.service: Deactivated successfully. May 13 00:29:36.767816 systemd-logind[1625]: Session 11 logged out. Waiting for processes to exit. May 13 00:29:36.768074 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:29:36.768906 systemd-logind[1625]: Removed session 11. May 13 00:29:41.772557 systemd[1]: Started sshd@9-139.178.70.107:22-139.178.68.195:41966.service - OpenSSH per-connection server daemon (139.178.68.195:41966). May 13 00:29:41.805235 sshd[7286]: Accepted publickey for core from 139.178.68.195 port 41966 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:41.806417 sshd[7286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.812995 systemd-logind[1625]: New session 12 of user core. May 13 00:29:41.815501 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:29:41.926114 sshd[7286]: pam_unix(sshd:session): session closed for user core May 13 00:29:41.931630 systemd[1]: Started sshd@10-139.178.70.107:22-139.178.68.195:41970.service - OpenSSH per-connection server daemon (139.178.68.195:41970). May 13 00:29:41.932273 systemd[1]: sshd@9-139.178.70.107:22-139.178.68.195:41966.service: Deactivated successfully. May 13 00:29:41.935267 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:29:41.938934 systemd-logind[1625]: Session 12 logged out. Waiting for processes to exit. May 13 00:29:41.940324 systemd-logind[1625]: Removed session 12. May 13 00:29:41.965494 sshd[7298]: Accepted publickey for core from 139.178.68.195 port 41970 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:41.967108 sshd[7298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.970081 systemd-logind[1625]: New session 13 of user core. May 13 00:29:41.972613 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:29:42.360855 sshd[7298]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.364694 systemd[1]: Started sshd@11-139.178.70.107:22-139.178.68.195:41978.service - OpenSSH per-connection server daemon (139.178.68.195:41978). May 13 00:29:42.385706 systemd[1]: sshd@10-139.178.70.107:22-139.178.68.195:41970.service: Deactivated successfully. May 13 00:29:42.389406 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:29:42.393219 systemd-logind[1625]: Session 13 logged out. Waiting for processes to exit. May 13 00:29:42.393823 systemd-logind[1625]: Removed session 13. May 13 00:29:42.845744 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:42.845748 systemd-resolved[1542]: Flushed all caches. May 13 00:29:42.846363 systemd-journald[1185]: Under memory pressure, flushing caches. 
May 13 00:29:42.895417 sshd[7310]: Accepted publickey for core from 139.178.68.195 port 41978 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:42.898107 sshd[7310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:42.908831 systemd-logind[1625]: New session 14 of user core. May 13 00:29:42.915708 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:29:43.142457 sshd[7310]: pam_unix(sshd:session): session closed for user core May 13 00:29:43.173620 systemd[1]: sshd@11-139.178.70.107:22-139.178.68.195:41978.service: Deactivated successfully. May 13 00:29:43.175395 systemd-logind[1625]: Session 14 logged out. Waiting for processes to exit. May 13 00:29:43.175738 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:29:43.176833 systemd-logind[1625]: Removed session 14. May 13 00:29:44.893647 systemd-resolved[1542]: Under memory pressure, flushing caches. May 13 00:29:44.893651 systemd-resolved[1542]: Flushed all caches. May 13 00:29:44.894419 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:29:48.150549 systemd[1]: Started sshd@12-139.178.70.107:22-139.178.68.195:34944.service - OpenSSH per-connection server daemon (139.178.68.195:34944). May 13 00:29:48.241214 sshd[7372]: Accepted publickey for core from 139.178.68.195 port 34944 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:48.242320 sshd[7372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:48.245518 systemd-logind[1625]: New session 15 of user core. May 13 00:29:48.250525 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:29:48.427453 sshd[7372]: pam_unix(sshd:session): session closed for user core May 13 00:29:48.431623 systemd[1]: sshd@12-139.178.70.107:22-139.178.68.195:34944.service: Deactivated successfully. May 13 00:29:48.435469 systemd-logind[1625]: Session 15 logged out. Waiting for processes to exit. May 13 00:29:48.435731 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:29:48.436715 systemd-logind[1625]: Removed session 15. May 13 00:29:53.432463 systemd[1]: Started sshd@13-139.178.70.107:22-139.178.68.195:34954.service - OpenSSH per-connection server daemon (139.178.68.195:34954). May 13 00:29:53.473209 sshd[7387]: Accepted publickey for core from 139.178.68.195 port 34954 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:53.472593 sshd[7387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:53.478313 systemd-logind[1625]: New session 16 of user core. May 13 00:29:53.484702 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:29:53.598066 sshd[7387]: pam_unix(sshd:session): session closed for user core May 13 00:29:53.601209 systemd-logind[1625]: Session 16 logged out. Waiting for processes to exit. May 13 00:29:53.603492 systemd[1]: sshd@13-139.178.70.107:22-139.178.68.195:34954.service: Deactivated successfully. May 13 00:29:53.608680 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:29:53.610081 systemd-logind[1625]: Removed session 16. May 13 00:29:58.603495 systemd[1]: Started sshd@14-139.178.70.107:22-139.178.68.195:33744.service - OpenSSH per-connection server daemon (139.178.68.195:33744). 
May 13 00:29:58.681919 sshd[7422]: Accepted publickey for core from 139.178.68.195 port 33744 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:29:58.683533 sshd[7422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:58.687508 systemd-logind[1625]: New session 17 of user core. May 13 00:29:58.690512 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:29:58.902898 sshd[7422]: pam_unix(sshd:session): session closed for user core May 13 00:29:58.905931 systemd-logind[1625]: Session 17 logged out. Waiting for processes to exit. May 13 00:29:58.906117 systemd[1]: sshd@14-139.178.70.107:22-139.178.68.195:33744.service: Deactivated successfully. May 13 00:29:58.908801 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:29:58.910312 systemd-logind[1625]: Removed session 17. May 13 00:30:03.911465 systemd[1]: Started sshd@15-139.178.70.107:22-139.178.68.195:59606.service - OpenSSH per-connection server daemon (139.178.68.195:59606). May 13 00:30:03.942392 sshd[7444]: Accepted publickey for core from 139.178.68.195 port 59606 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:30:03.943141 sshd[7444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:03.946033 systemd-logind[1625]: New session 18 of user core. May 13 00:30:03.951470 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:30:04.079805 sshd[7444]: pam_unix(sshd:session): session closed for user core May 13 00:30:04.086969 systemd[1]: sshd@15-139.178.70.107:22-139.178.68.195:59606.service: Deactivated successfully. May 13 00:30:04.089876 systemd-logind[1625]: Session 18 logged out. Waiting for processes to exit. May 13 00:30:04.090439 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:30:04.091196 systemd-logind[1625]: Removed session 18. May 13 00:30:05.616552 kubelet[2967]: I0513 00:30:05.616528 2967 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:30:05.745915 containerd[1654]: time="2025-05-13T00:30:05.745880237Z" level=info msg="StopContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" with timeout 30 (s)" May 13 00:30:05.747944 containerd[1654]: time="2025-05-13T00:30:05.747924415Z" level=info msg="Stop container \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" with signal terminated" May 13 00:30:05.861471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc-rootfs.mount: Deactivated successfully. 
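The StopContainer record above ("with timeout 30 (s)", then "Stop container ... with signal terminated") describes the usual graceful-stop sequence: deliver SIGTERM, wait out the timeout, then force-kill. A minimal Go sketch of that pattern follows; it uses a stand-in child process, not the real container runtime, and the 30-second value is simply the one reported in the log.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for the workload process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// "Stop container ... with signal terminated"
	_ = cmd.Process.Signal(syscall.SIGTERM)

	select {
	case err := <-done:
		fmt.Println("exited after SIGTERM:", err)
	case <-time.After(30 * time.Second): // "with timeout 30 (s)"
		_ = cmd.Process.Kill()
		fmt.Println("timeout elapsed, sent SIGKILL:", <-done)
	}
}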
May 13 00:30:05.887441 containerd[1654]: time="2025-05-13T00:30:05.869839019Z" level=info msg="shim disconnected" id=2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc namespace=k8s.io May 13 00:30:05.887520 containerd[1654]: time="2025-05-13T00:30:05.887508159Z" level=warning msg="cleaning up after shim disconnected" id=2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc namespace=k8s.io May 13 00:30:05.887560 containerd[1654]: time="2025-05-13T00:30:05.887553367Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:05.953423 containerd[1654]: time="2025-05-13T00:30:05.953370304Z" level=info msg="StopContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" returns successfully" May 13 00:30:05.958575 containerd[1654]: time="2025-05-13T00:30:05.958557875Z" level=info msg="StopPodSandbox for \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\"" May 13 00:30:05.959242 containerd[1654]: time="2025-05-13T00:30:05.959222720Z" level=info msg="Container to stop \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:05.961225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49-shm.mount: Deactivated successfully. May 13 00:30:05.979162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49-rootfs.mount: Deactivated successfully. May 13 00:30:05.980005 containerd[1654]: time="2025-05-13T00:30:05.979560033Z" level=info msg="shim disconnected" id=d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49 namespace=k8s.io May 13 00:30:05.980005 containerd[1654]: time="2025-05-13T00:30:05.979588034Z" level=warning msg="cleaning up after shim disconnected" id=d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49 namespace=k8s.io May 13 00:30:05.980005 containerd[1654]: time="2025-05-13T00:30:05.979593288Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:06.182698 systemd-networkd[1286]: cali980d56dedd6: Link DOWN May 13 00:30:06.182703 systemd-networkd[1286]: cali980d56dedd6: Lost carrier May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.170 [INFO][7541] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.173 [INFO][7541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" iface="eth0" netns="/var/run/netns/cni-a90a45cd-a343-66bc-3a8c-1198c8fd3b74" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.173 [INFO][7541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" iface="eth0" netns="/var/run/netns/cni-a90a45cd-a343-66bc-3a8c-1198c8fd3b74" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.187 [INFO][7541] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" after=14.035389ms iface="eth0" netns="/var/run/netns/cni-a90a45cd-a343-66bc-3a8c-1198c8fd3b74" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.187 [INFO][7541] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.187 [INFO][7541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.324 [INFO][7553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.326 [INFO][7553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.326 [INFO][7553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.353 [INFO][7553] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.353 [INFO][7553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0" May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.354 [INFO][7553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:30:06.357075 containerd[1654]: 2025-05-13 00:30:06.355 [INFO][7541] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" May 13 00:30:06.359877 containerd[1654]: time="2025-05-13T00:30:06.357744013Z" level=info msg="TearDown network for sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" successfully" May 13 00:30:06.359877 containerd[1654]: time="2025-05-13T00:30:06.357767412Z" level=info msg="StopPodSandbox for \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" returns successfully" May 13 00:30:06.360099 systemd[1]: run-netns-cni\x2da90a45cd\x2da343\x2d66bc\x2d3a8c\x2d1198c8fd3b74.mount: Deactivated successfully. 
May 13 00:30:06.509298 kubelet[2967]: I0513 00:30:06.508724 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ppzc\" (UniqueName: \"kubernetes.io/projected/d2f1ddd1-14cd-452c-934e-9a28944a7b23-kube-api-access-9ppzc\") pod \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\" (UID: \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\") " May 13 00:30:06.509298 kubelet[2967]: I0513 00:30:06.508758 2967 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2f1ddd1-14cd-452c-934e-9a28944a7b23-calico-apiserver-certs\") pod \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\" (UID: \"d2f1ddd1-14cd-452c-934e-9a28944a7b23\") " May 13 00:30:06.528798 systemd[1]: var-lib-kubelet-pods-d2f1ddd1\x2d14cd\x2d452c\x2d934e\x2d9a28944a7b23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ppzc.mount: Deactivated successfully. May 13 00:30:06.528886 systemd[1]: var-lib-kubelet-pods-d2f1ddd1\x2d14cd\x2d452c\x2d934e\x2d9a28944a7b23-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 13 00:30:06.537260 kubelet[2967]: I0513 00:30:06.535016 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f1ddd1-14cd-452c-934e-9a28944a7b23-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d2f1ddd1-14cd-452c-934e-9a28944a7b23" (UID: "d2f1ddd1-14cd-452c-934e-9a28944a7b23"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:30:06.537299 kubelet[2967]: I0513 00:30:06.535515 2967 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f1ddd1-14cd-452c-934e-9a28944a7b23-kube-api-access-9ppzc" (OuterVolumeSpecName: "kube-api-access-9ppzc") pod "d2f1ddd1-14cd-452c-934e-9a28944a7b23" (UID: "d2f1ddd1-14cd-452c-934e-9a28944a7b23"). InnerVolumeSpecName "kube-api-access-9ppzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:30:06.611793 kubelet[2967]: I0513 00:30:06.611773 2967 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9ppzc\" (UniqueName: \"kubernetes.io/projected/d2f1ddd1-14cd-452c-934e-9a28944a7b23-kube-api-access-9ppzc\") on node \"localhost\" DevicePath \"\"" May 13 00:30:06.611793 kubelet[2967]: I0513 00:30:06.611795 2967 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d2f1ddd1-14cd-452c-934e-9a28944a7b23-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 13 00:30:06.786954 kubelet[2967]: I0513 00:30:06.786882 2967 scope.go:117] "RemoveContainer" containerID="2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc" May 13 00:30:06.793650 containerd[1654]: time="2025-05-13T00:30:06.791435811Z" level=info msg="RemoveContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\"" May 13 00:30:06.804281 containerd[1654]: time="2025-05-13T00:30:06.804000023Z" level=info msg="RemoveContainer for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" returns successfully" May 13 00:30:06.806784 kubelet[2967]: I0513 00:30:06.805216 2967 scope.go:117] "RemoveContainer" containerID="2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc" May 13 00:30:06.811585 containerd[1654]: time="2025-05-13T00:30:06.807209655Z" level=error msg="ContainerStatus for \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\": not found" May 13 00:30:06.825347 kubelet[2967]: E0513 00:30:06.825185 2967 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\": not found" containerID="2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc" May 13 00:30:06.830857 kubelet[2967]: I0513 00:30:06.826125 2967 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc"} err="failed to get container status \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"2742db4c4fa26718c3fed97f7bef56514ae859abe8be7b826f422bc4385854fc\": not found" May 13 00:30:08.322940 kubelet[2967]: I0513 00:30:08.322804 2967 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f1ddd1-14cd-452c-934e-9a28944a7b23" path="/var/lib/kubelet/pods/d2f1ddd1-14cd-452c-934e-9a28944a7b23/volumes" May 13 00:30:09.085467 systemd[1]: Started sshd@16-139.178.70.107:22-139.178.68.195:59610.service - OpenSSH per-connection server daemon (139.178.68.195:59610). May 13 00:30:09.188700 sshd[7566]: Accepted publickey for core from 139.178.68.195 port 59610 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:30:09.190500 sshd[7566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:09.196467 systemd-logind[1625]: New session 19 of user core. May 13 00:30:09.200664 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 13 00:30:09.471118 sshd[7566]: pam_unix(sshd:session): session closed for user core May 13 00:30:09.477104 systemd[1]: Started sshd@17-139.178.70.107:22-139.178.68.195:59624.service - OpenSSH per-connection server daemon (139.178.68.195:59624). May 13 00:30:09.477410 systemd[1]: sshd@16-139.178.70.107:22-139.178.68.195:59610.service: Deactivated successfully. May 13 00:30:09.479647 systemd-logind[1625]: Session 19 logged out. Waiting for processes to exit. May 13 00:30:09.479652 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:30:09.481148 systemd-logind[1625]: Removed session 19. May 13 00:30:09.522394 sshd[7577]: Accepted publickey for core from 139.178.68.195 port 59624 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:30:09.523482 sshd[7577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:09.527814 systemd-logind[1625]: New session 20 of user core. May 13 00:30:09.532609 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:30:09.947526 sshd[7577]: pam_unix(sshd:session): session closed for user core May 13 00:30:09.953884 systemd[1]: Started sshd@18-139.178.70.107:22-139.178.68.195:59634.service - OpenSSH per-connection server daemon (139.178.68.195:59634). May 13 00:30:09.954796 systemd[1]: sshd@17-139.178.70.107:22-139.178.68.195:59624.service: Deactivated successfully. May 13 00:30:09.958371 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:30:09.959815 systemd-logind[1625]: Session 20 logged out. Waiting for processes to exit. May 13 00:30:09.960623 systemd-logind[1625]: Removed session 20. May 13 00:30:09.991843 sshd[7589]: Accepted publickey for core from 139.178.68.195 port 59634 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:30:09.993830 sshd[7589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:09.997157 systemd-logind[1625]: New session 21 of user core. May 13 00:30:10.003539 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:30:11.800487 systemd[1]: Started sshd@19-139.178.70.107:22-139.178.68.195:59648.service - OpenSSH per-connection server daemon (139.178.68.195:59648). May 13 00:30:11.807572 sshd[7589]: pam_unix(sshd:session): session closed for user core May 13 00:30:11.817385 systemd[1]: sshd@18-139.178.70.107:22-139.178.68.195:59634.service: Deactivated successfully. May 13 00:30:11.820114 systemd-logind[1625]: Session 21 logged out. Waiting for processes to exit. May 13 00:30:11.820524 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:30:11.821929 systemd-logind[1625]: Removed session 21. May 13 00:30:11.895652 sshd[7608]: Accepted publickey for core from 139.178.68.195 port 59648 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U May 13 00:30:11.896736 sshd[7608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:11.901364 systemd-logind[1625]: New session 22 of user core. May 13 00:30:11.904540 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:30:12.841970 sshd[7608]: pam_unix(sshd:session): session closed for user core May 13 00:30:12.847499 systemd[1]: Started sshd@20-139.178.70.107:22-139.178.68.195:59652.service - OpenSSH per-connection server daemon (139.178.68.195:59652). May 13 00:30:12.865452 systemd-journald[1185]: Under memory pressure, flushing caches. May 13 00:30:12.861602 systemd-resolved[1542]: Under memory pressure, flushing caches. 
May 13 00:30:12.861607 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:12.864969 systemd[1]: sshd@19-139.178.70.107:22-139.178.68.195:59648.service: Deactivated successfully.
May 13 00:30:12.866195 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:30:12.873125 systemd-logind[1625]: Session 22 logged out. Waiting for processes to exit.
May 13 00:30:12.876843 systemd-logind[1625]: Removed session 22.
May 13 00:30:12.906612 sshd[7632]: Accepted publickey for core from 139.178.68.195 port 59652 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U
May 13 00:30:12.907820 sshd[7632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:30:12.910601 systemd-logind[1625]: New session 23 of user core.
May 13 00:30:12.915496 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 00:30:13.289305 sshd[7632]: pam_unix(sshd:session): session closed for user core
May 13 00:30:13.293454 systemd[1]: sshd@20-139.178.70.107:22-139.178.68.195:59652.service: Deactivated successfully.
May 13 00:30:13.297116 systemd[1]: session-23.scope: Deactivated successfully.
May 13 00:30:13.298661 systemd-logind[1625]: Session 23 logged out. Waiting for processes to exit.
May 13 00:30:13.301748 systemd-logind[1625]: Removed session 23.
May 13 00:30:14.909494 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:14.910399 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:14.909498 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:18.294934 systemd[1]: Started sshd@21-139.178.70.107:22-139.178.68.195:45040.service - OpenSSH per-connection server daemon (139.178.68.195:45040).
May 13 00:30:18.356010 sshd[7687]: Accepted publickey for core from 139.178.68.195 port 45040 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U
May 13 00:30:18.356912 sshd[7687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:30:18.364324 systemd-logind[1625]: New session 24 of user core.
May 13 00:30:18.366526 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 00:30:18.612593 sshd[7687]: pam_unix(sshd:session): session closed for user core
May 13 00:30:18.615468 systemd[1]: sshd@21-139.178.70.107:22-139.178.68.195:45040.service: Deactivated successfully.
May 13 00:30:18.618946 systemd-logind[1625]: Session 24 logged out. Waiting for processes to exit.
May 13 00:30:18.619009 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:30:18.620614 systemd-logind[1625]: Removed session 24.
May 13 00:30:20.862645 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:20.872369 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:20.862651 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:22.909593 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:22.962267 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:22.909598 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:23.616474 systemd[1]: Started sshd@22-139.178.70.107:22-139.178.68.195:39752.service - OpenSSH per-connection server daemon (139.178.68.195:39752).
May 13 00:30:23.682017 sshd[7703]: Accepted publickey for core from 139.178.68.195 port 39752 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U
May 13 00:30:23.683032 sshd[7703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:30:23.686539 systemd-logind[1625]: New session 25 of user core.
May 13 00:30:23.691538 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 00:30:23.945493 sshd[7703]: pam_unix(sshd:session): session closed for user core
May 13 00:30:23.953294 systemd[1]: sshd@22-139.178.70.107:22-139.178.68.195:39752.service: Deactivated successfully.
May 13 00:30:23.956574 systemd-logind[1625]: Session 25 logged out. Waiting for processes to exit.
May 13 00:30:23.957064 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:30:23.958134 systemd-logind[1625]: Removed session 25.
May 13 00:30:24.235613 containerd[1654]: time="2025-05-13T00:30:24.235514270Z" level=info msg="StopPodSandbox for \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\""
May 13 00:30:24.957406 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:24.957412 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:24.959355 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:24.979 [WARNING][7729] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:24.981 [INFO][7729] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:24.981 [INFO][7729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" iface="eth0" netns=""
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:24.981 [INFO][7729] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:24.981 [INFO][7729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.534 [INFO][7736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.536 [INFO][7736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.536 [INFO][7736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.541 [WARNING][7736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.541 [INFO][7736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.542 [INFO][7736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:30:25.545197 containerd[1654]: 2025-05-13 00:30:25.544 [INFO][7729] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.550460 containerd[1654]: time="2025-05-13T00:30:25.550423868Z" level=info msg="TearDown network for sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" successfully"
May 13 00:30:25.550460 containerd[1654]: time="2025-05-13T00:30:25.550454044Z" level=info msg="StopPodSandbox for \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" returns successfully"
May 13 00:30:25.553797 containerd[1654]: time="2025-05-13T00:30:25.553772198Z" level=info msg="RemovePodSandbox for \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\""
May 13 00:30:25.554665 containerd[1654]: time="2025-05-13T00:30:25.554647356Z" level=info msg="Forcibly stopping sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\""
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.581 [WARNING][7754] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.581 [INFO][7754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.581 [INFO][7754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" iface="eth0" netns=""
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.582 [INFO][7754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.582 [INFO][7754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.615 [INFO][7762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.615 [INFO][7762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.615 [INFO][7762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.625 [WARNING][7762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.625 [INFO][7762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" HandleID="k8s-pod-network.d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49" Workload="localhost-k8s-calico--apiserver--5fcdd59ffd--gtcr6-eth0"
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.626 [INFO][7762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 13 00:30:25.629467 containerd[1654]: 2025-05-13 00:30:25.627 [INFO][7754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49"
May 13 00:30:25.630220 containerd[1654]: time="2025-05-13T00:30:25.629496723Z" level=info msg="TearDown network for sandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" successfully"
May 13 00:30:25.634875 containerd[1654]: time="2025-05-13T00:30:25.634852620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 13 00:30:25.635343 containerd[1654]: time="2025-05-13T00:30:25.634900306Z" level=info msg="RemovePodSandbox \"d20cf9d6e8012019de8c66e56402082068f8419b948332fc02ef397838479d49\" returns successfully"
May 13 00:30:27.005406 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:27.006423 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:27.005412 systemd-resolved[1542]: Flushed all caches.
May 13 00:30:28.951571 systemd[1]: Started sshd@23-139.178.70.107:22-139.178.68.195:39764.service - OpenSSH per-connection server daemon (139.178.68.195:39764).
May 13 00:30:29.114003 sshd[7772]: Accepted publickey for core from 139.178.68.195 port 39764 ssh2: RSA SHA256:iRPXTnUjHi5KLd+fyNY68ZfnJOxaHNAGmZgm5aHJa9U
May 13 00:30:29.117932 sshd[7772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:30:29.124037 systemd-logind[1625]: New session 26 of user core.
May 13 00:30:29.127548 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 00:30:29.565589 sshd[7772]: pam_unix(sshd:session): session closed for user core
May 13 00:30:29.572762 systemd-logind[1625]: Session 26 logged out. Waiting for processes to exit.
May 13 00:30:29.573022 systemd[1]: sshd@23-139.178.70.107:22-139.178.68.195:39764.service: Deactivated successfully.
May 13 00:30:29.574140 systemd[1]: session-26.scope: Deactivated successfully.
May 13 00:30:29.575578 systemd-logind[1625]: Removed session 26.
May 13 00:30:30.845450 systemd-resolved[1542]: Under memory pressure, flushing caches.
May 13 00:30:30.846496 systemd-journald[1185]: Under memory pressure, flushing caches.
May 13 00:30:30.845455 systemd-resolved[1542]: Flushed all caches.